This week in tech: 20.04.2026
Summary of AI developments - made for busy people
APPLICATIONS
This has serious Grievance Studies vibes, updated for the ChatGPT era: an article in “Nature” describes a hoax study in which researchers invented a fake condition (“bixonimania”) and submitted obviously fabricated papers filled with absurd clues. The experiment exposes - AGAIN - how utterly broken the publishing ecosystem is.
https://www.nature.com/articles/d41586-026-01100-y
Google is back in the VLM game:
TIPSv2’s unique pitch is strong spatial understanding:
Produces spatially aware features that align text with specific image regions → enabling zero-shot segmentation and depth estimation.
Combines contrastive learning with masked image modeling, using synthetic captions for richer spatial signals and better dense task performance.
Integrated with Hugging Face Transformers, allowing easy extraction of both global embeddings and per-patch features.
HF model page: https://huggingface.co/google/tipsv2-b14
Repo: https://github.com/google-deepmind/tips
Paper: https://arxiv.org/abs/2410.16512
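To make the “per-patch features enable zero-shot segmentation” claim concrete, here is a toy sketch of the downstream matching step only: label each image patch by its most similar text prompt via cosine similarity. The embeddings are random placeholders, NOT real TIPSv2 outputs - in practice you would extract them with the model from the links above.

```python
import numpy as np

def zero_shot_patch_labels(patch_feats, text_feats):
    """Assign each image patch the label of its most similar text prompt.

    patch_feats: (num_patches, dim) per-patch image embeddings
    text_feats:  (num_labels, dim) text prompt embeddings
    Returns an array of label indices, one per patch.
    """
    # L2-normalize so dot products become cosine similarities
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = p @ t.T                # (num_patches, num_labels)
    return sim.argmax(axis=1)    # coarse segmentation: one label per patch

# Placeholder embeddings standing in for TIPSv2 features
rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 64))   # e.g. a 14x14 patch grid
texts = rng.normal(size=(3, 64))       # e.g. prompts for "cat", "grass", "sky"
labels = zero_shot_patch_labels(patches, texts)
print(labels.shape)  # (196,)
```

The point of spatially aware contrastive training is precisely that this trivial argmax over patch-text similarities yields a usable segmentation mask, with no segmentation head trained.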
Turns out Europe can actually into AI:
Spanish legislation is on GitHub (nice)
France has created an MCP server for its data portal and a Claude skill for people to do their taxes (WTF)
Estonia has a “state-created, AI-based digital assistant that helps institutions deliver modern and efficient customer service”
Spain: https://github.com/legalize-dev/legalize-es
Estonia: https://www.kratid.ee/en/burokratt
France: https://www.data.gouv.fr/reuses/paperasse-skills-ia-pour-la-comptabilite-et-fiscalite-francaise
TL;DR OBLITERATUS - one click to remove LLM censorship:
The toolkit identifies refusal weights and eliminates them
The model keeps working w/out guardrails
Local models are now uncensorable - good luck with the safety filters.
Repo: https://github.com/elder-plinius/OBLITERATUS
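For flavor, here is a minimal sketch of the general “abliteration”-style idea such tools are built on - my assumption about the mechanism, not the repo’s actual code: estimate a “refusal direction” in activation space (typically the mean activation difference between harmful and harmless prompts), then project it out of a layer’s weights so the layer can no longer write along that direction.

```python
import numpy as np

def ablate_direction(W, r):
    """Remove a 'refusal direction' r from weight matrix W.

    Projects W's output onto the subspace orthogonal to r, so the layer
    can no longer contribute anything along r to the residual stream.
    W: (d_out, d_in) weight matrix; r: (d_out,) direction vector.
    """
    r = r / np.linalg.norm(r)          # unit refusal direction
    return W - np.outer(r, r @ W)      # W' = (I - r r^T) W

# Toy example with random stand-ins; in practice r is estimated from
# activation differences on refused vs. answered prompts.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
refusal_dir = rng.normal(size=8)
W_ablated = ablate_direction(W, refusal_dir)

# The edited layer's outputs have (numerically) zero component along r
r_unit = refusal_dir / np.linalg.norm(refusal_dir)
print(np.abs(r_unit @ W_ablated).max() < 1e-10)  # True
```

The model keeps its capabilities because only a rank-1 component is removed from each edited matrix - which is exactly why this class of edit is so hard to defend against in open-weight models.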
Anthropic (1):
Big business in bed with big government - what could possibly go wrong? The White House is telling major banks to test Anthropic’s Claude Mythos (a.k.a. GPT-2 redux - the one too dangerous for public consumption) for cybersecurity tasks. The initiative is also backed by the Federal Reserve, which strongly suggests that the fallout between Anthropic and the Pentagon was not about the principles of Amodei and his minions, but a straightforward negotiation tactic.
Btw: the same model that’s too dangerous to be released to the public? It’s perfectly fine to hand it to the military industrial complex.
https://letsdatascience.com/news/white-house-directs-banks-to-use-anthropic-mythos-6376ac83
Anthropic (2):
Anthropic seems to be spreading on both sides of the Atlantic: Claude Mythos Preview is the first model shown to complete a full simulated corporate network attack, according to the UK AI Security Institute. This is a major leap forward in AI cyber capabilities: the model achieved a 73% success rate on expert-level tasks and demonstrated the ability to discover and exploit vulnerabilities across major systems.
https://tech.yahoo.com/ai/claude/articles/claude-mythos-cracks-73-expert-102502316.html
BUSINESS
I’m not saying it’s Dot-com bubble 2.0 - but the signs are clearly there.
https://uk.finance.yahoo.com/news/really-means-failing-shoe-brand-132900745.html
Anthropic (3):
There does seem to be a method to Anthropic’s madness (omg omg ai will kill us all this is too dangerous for the public only we can be trusted) - if a sinister one:
First they come out against open-source AI
Scream from the rooftops that only “experts” (like them) should control AI
Go after GitHub repositories for allegedly “leaking their source code” (= taking apart their own leak and reproducing it)
Force new Claude users to verify their identity, using actual ID
Is it just me, or is there a pattern?
https://support.claude.com/en/articles/14328960-identity-verification-on-claude
Two economists are walking in a forest when they come across a pile of shit. The first economist says to the other “I’ll pay you 100 dollars to eat that pile of shit.” The second economist takes the money and eats the pile of shit. They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you 100 dollars to eat that pile of shit.” The first economist takes the money and eats the pile of shit. Walking a little more, the first economist looks at the second and says, “You know, I gave you 100 dollars to eat shit, then you gave me back the same 100 dollars to eat shit. I can’t help but feel like we both just ate shit for nothing.”
“That’s not true”, responds the second economist. “We increased the GDP by 200 dollars!”
In completely unrelated news, EU committees vote yes to pause the AI Act.
Remember the moral panic about Grok allowing users to nudify photos? It smelled then (X is not even in the top 5 of the worst offenders when it comes to CSAM), and it smells even more now: it turns out that Apple and Google have directed users to dozens of nudifying apps, which have earned $122 million and were downloaded 483 million times.
And yet nary a peep from the self-appointed guardians of morality. Gee, I wonder why.
CUTTING EDGE
Ai2 just released WildDet3D: an open source model for open-vocabulary 3D object detection:
1.2B-parameter model, works from a single RGB image using text, box, or point prompts.
Estimates camera intrinsics automatically (and can optionally incorporate depth data for better localization).
It outputs 2D boxes, 3D boxes, and depth maps in a single pass.
HF model page: https://huggingface.co/allenai/WildDet3D
HF dataset page: https://huggingface.co/datasets/allenai/WildDet3D-Data
FRINGE
The future is here:
“For the first time in the war, an enemy position was captured entirely by ground robotic systems and drones - without any infantry. A robot entered the most dangerous zones instead of a soldier and took the positions”
https://www.politico.eu/article/volodymyr-zelenskyy-robotic-systems-russia-army-positions-ukraine/
Big Tech now wants you to trade real relationships for digital ones - think: r/chatgpt writ large.
https://www.salon.com/2026/03/20/big-tech-wants-you-to-give-up-on-dating-humans/
Faith is about having a personal relationship with Jesus - at least according to the Christian denominations that are big in the US. At USD 1.99 per minute, the tech startup Just Like Me is taking that concept to a new level: users can join video calls with an avatar of Jesus. The AI-powered blasphemy offers words of prayer and encouragement in various languages and remembers previous conversations.
Say hello to Buddy Christ.
Anthropic invited Christian leaders (clergy and laymen) to discuss topics like whether Claude could be considered a child of God. As a Catholic, I would like to apologize for our mistake: we never should have disbanded the Inquisition - back in the day, such “leaders” would have been handled internally and permanently.
We are sorry.
https://www.washingtonpost.com/technology/2026/04/11/anthropic-christians-claude-morals/
The most naturally charismatic CEO out there, Mark Zuckerberg, is getting an avatar. Meta is creating an AI version of Mark Zuckerberg, so the great unwashed can talk to their glorious leader.
Don’t get me wrong, MZ has always triggered uncanny valley vibes in yours truly - and with the amount of data Meta has, chances are the avatar will act more natural than the original - but employees talking to a hallucinating LLM? The memes write themselves.
https://www.theguardian.com/technology/2026/apr/13/meta-ai-mark-zuckerberg-staff-talk-to-the-boss
RESEARCH
First we got transformers for tabular data, now diffusion is here for time series: the study adapts tabular diffusion models for sequential data by introducing temporal adapters and sequence-aware embeddings. The model generates highly realistic synthetic sensor data that maintains temporal dependencies and autocorrelation patterns - rather handy, if you want to augment your data and preserve privacy.
Paper: https://arxiv.org/abs/2604.05257
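If you do generate synthetic sensor data this way, you need to check that the temporal structure actually survived. A simple, standard sanity check (my suggestion, not from the paper) is to compare sample autocorrelation functions of real and synthetic series - the toy series below are placeholders:

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[: len(x) - k] * x[k:]) / denom
                     for k in range(1, max_lag + 1)])

# Stand-ins for real vs. synthetic sensor traces: same dynamics,
# different phase and noise realization
rng = np.random.default_rng(0)
real = np.sin(np.arange(500) / 5) + 0.1 * rng.normal(size=500)
synthetic = np.sin(np.arange(500) / 5 + 0.3) + 0.1 * rng.normal(size=500)

# Small max gap between the two ACFs = temporal dependencies preserved
gap = np.abs(autocorr(real, 20) - autocorr(synthetic, 20)).max()
print(round(gap, 3))
```

The same comparison on lagged cross-correlations extends the check to multivariate sensor data.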
Adversarial attacks on time series models are getting less love and attention than the classification kind - but that’s not to say they don’t happen. This paper investigates the vulnerability of deep State-Space Models: it establishes theoretical bounds for forecasting errors and proposes a robust design framework to defend against stealthy perturbations.
Paper: https://arxiv.org/abs/2604.03427
Oh this is good: a new paper shows that a simple method called “context parroting” (= fancy term for matching patterns in recent time-series data) can outperform several advanced forecasting models on certain tasks. The approach works as a surprisingly strong zero-shot baseline and performs especially well on complex systems like chaotic dynamics. Deciding whether it says something important about the “SOTA” models or about the benchmarks? That’s left as an exercise to the reader.
Paper: https://openreview.net/forum?id=EUAXc9Hlvm
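To show just how simple the baseline is, here is a minimal sketch of context parroting as I read it from the description above (details are my assumptions, not the paper’s exact algorithm): take the last few points as a query, find the most similar window earlier in the history, and replay whatever followed it.

```python
import numpy as np

def context_parrot(series, context_len, horizon):
    """Zero-shot forecast by 'parroting': find the historical window most
    similar to the last `context_len` points and replay what followed it."""
    series = np.asarray(series, dtype=float)
    query = series[-context_len:]
    best_start, best_dist = None, np.inf
    # Scan every earlier window that still has `horizon` points after it
    for start in range(len(series) - context_len - horizon):
        window = series[start:start + context_len]
        dist = np.sum((window - query) ** 2)
        if dist < best_dist:
            best_start, best_dist = start, dist
    follow = series[best_start + context_len : best_start + context_len + horizon]
    return follow.copy()

# On a periodic toy series the recent context has an exact earlier match,
# so parroting reproduces the true continuation
t = np.arange(200)
series = np.sin(2 * np.pi * t / 25)
forecast = context_parrot(series, context_len=25, horizon=10)
print(np.allclose(forecast, np.sin(2 * np.pi * np.arange(200, 210) / 25)))  # True
```

That a nearest-neighbor lookup like this beats billion-parameter forecasters on chaotic benchmarks is exactly the awkward result the paper reports.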