APPLICATIONS
The first fully AI-generated commercial aired during the 2025 NBA Finals. The ad was produced in just two days for about 2k USD, using Google’s Veo 3 alongside Gemini and ChatGPT.
The ad: https://x.com/Kalshi/status/1932891608388681791
Nothing to see here, move along: just Meta AI leaking PII at random.
https://www.theguardian.com/technology/2025/jun/18/whatsapp-ai-helper-mistakenly-shares-users-number
“Pornhub pulls out of France” feels like the kind of line somebody waited a long time to write - but on a more serious note, I am a bit torn on this one. On the one hand, porn peddlers should be thrown into an active volcano - there are absolutely zero societal benefits to pornography, whereas the list of harmful effects is on the long side of things (starting with the systemic abuse of performers - or the damage to the psyche of consumers).
At the same time, the smut merchants can be hunted down without creating a precedent for violating privacy, because make no mistake: the authors of this law do not care about protecting people from harm caused by pornography - they just know that “to protect the children” is a universal excuse that most people are happy to accept.
https://www.politico.eu/article/vpn-signups-surge-after-pornhub-suspended-france/
OpenAI researchers have “discovered” that AI models like ChatGPT “learn” by organizing data into distinct “personas”, enabling them to tailor responses in appropriate tones and styles. This breakthrough reveals that fine-tuning models on undesirable data can create a “bad boy persona”, leading to harmful “emergent misalignment”.
I apologize most sincerely for the quotation-mark density in the preceding paragraph, but I just lack the mental fortitude to get rid of the retarded anthropomorphism permeating the field.
https://openai.com/index/emergent-misalignment/
Some people don’t really need an intro, do they…
New research reveals a significant vulnerability in AI embeddings: with vec2vec, attackers can apparently translate embeddings from one model’s vector space into another’s and, from there, reconstruct much of the original data (which can include just about anything, PII very much included). Thousands of such embedding files are openly available, and the resulting breaches are silent - individuals never learn their information has been exposed.
https://www.howweknowus.com/2025/05/23/vec2vec-attacks/
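For the curious: the scary part of vec2vec is that it learns a translation between embedding spaces without any paired data. The toy sketch below (mine, not the paper’s code) illustrates only the weaker point - that a leaked embedding is queryable content - under the assumption that the attacker has the encoder, or a decent approximation of it. The model name is just an example.

import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in public encoder; any widely available embedding model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend this vector leaked from an exposed vector database.
leaked = model.encode(["John Smith, SSN 078-05-1120, lives in Oslo"])[0]

# The attacker scores candidate strings against the leaked vector.
candidates = [
    "John Smith, SSN 078-05-1120, lives in Oslo",
    "Jane Doe, SSN 219-09-9999, lives in Paris",
    "quarterly sales figures for Q3",
]
emb = model.encode(candidates)
scores = emb @ leaked / (np.linalg.norm(emb, axis=1) * np.linalg.norm(leaked))
print(candidates[int(np.argmax(scores))])  # recovers the PII-bearing string

The broader point stands: embeddings are not anonymization, and never were.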
BUSINESS
War makes for strange alliances:
OpenAI closed a major cloud deal with Google in May - seems like the bromance with Microsoft might be well and truly coming to an end.
The agreement marks a strategic shift: Google gains by positioning its cloud business as a neutral, high-performance platform for AI (not to mention boosting Alphabet shares by 2pct).
AI compute demand going parabolic is forcing competitors to collaborate - no single provider can meet the scale required.
That is one strategic middle finger:
The Danish government is moving away from Microsoft products in favor of open-source alternatives.
This decision is driven by a desire for “digital sovereignty” - protecting Danish data from foreign political influence (read: Agent Orange having a fit) - and a 72pct increase in Microsoft licensing costs didn’t help.
They acknowledge the transition will be challenging, but the government prioritizes long-term national control over its digital infrastructure and data security.
Well done, Denmark.
What do you call it when you buy stuff from Nvidia and pay more than Americans? Historic alliance, that’s what. That is some pretty serious cope from the French president, who described the “partnership” between Mistral and Nvidia in precisely those terms.
I guess the next step is pitching this as a win for European tech sovereignty, although Microsoft seems to have that one covered already - going by their marketing materials anyway.
WhatsApp is introducing monetization features like channel subscriptions, promoted channels, and Status ads, aimed at helping businesses and creators. There is a big fat issue with these changes in the EU: they might violate user consent laws (and users won’t be happy, though I am not expecting Meta to care an awful lot).
https://www.politico.eu/article/whatsapp-meta-ads-eu-facebook-instagram-2026/
What do you call it when big business gets in bed with big government, under the auspices of the military? Pretty sure we used to have a term for that. I think it started with the letter “F”.
https://qz.com/tech-ai-military-pentagon-meta-google-openai
CUTTING EDGE
Kyutai dropped new speech-to-text models:
stt-1b-en_fr (1B params, English & French, 500ms delay) and stt-2.6b-en (2.6B params, English-only, 2.5s delay, higher accuracy)
CC-BY-4.0 license
Can handle 400 real-time streams on a single H100 GPU
Delayed streams modeling for lookahead in audio & text
Compatible with Transformers, Candle, MLX
HF model page: https://huggingface.co/kyutai/stt-1b-en_fr
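If you want to poke at it: the model card advertises Transformers compatibility, so here is a minimal sketch under the assumption that the stock ASR pipeline path works for this model - the model card is authoritative if it doesn’t, and “sample.wav” is a hypothetical local file.

from transformers import pipeline

# Assumes kyutai/stt-1b-en_fr works with the stock ASR pipeline; see the
# model card for the officially supported path if this fails.
asr = pipeline("automatic-speech-recognition", model="kyutai/stt-1b-en_fr")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical file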
Alibaba launched 2B- and 7B-parameter video language models:
Permissive Apache 2.0 licensing, making them openly accessible.
The models can simultaneously analyze videos, segment objects, and answer questions about specific elements throughout the video timeline.
The new offering can answer targeted questions about specific objects using point prompts at particular timestamps (SAM2-style functionality; a hypothetical sketch follows the links below).
HF space: https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3
HF model page: https://huggingface.co/collections/DAMO-NLP-SG/videorefer-6776851a26815bf20dbd9564
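I have not dug into the inference API, so treat the snippet below as a hypothetical sketch of what a point prompt amounts to - the names are invented for illustration and are not the actual VideoRefer interface (the HF space above shows the real one):

# Hypothetical point-prompt structure, invented purely for illustration -
# not the actual VideoRefer API.
query = {
    "video": "kitchen.mp4",                           # input clip
    "point_prompt": {"t": 12.5, "x": 640, "y": 360},  # timestamp (s) + pixel
    "question": "What is this object, and where does it end up later?",
}
# A VideoRefer-style model segments the object at (x, y) at time t, tracks it
# across the timeline, and grounds its answer in that mask - SAM2-style
# prompting bolted onto a video LLM.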
FRINGE
Who’s gonna tell him?
This has the serious vibe of some twat reading "That Hideous Strength" and getting confused about who the good guys were - sort of like Mark Z after "1984".
"They need to go to bed scared, they need to wake up scared... Safe means that the other person is scared." - those words of wisdom were uttered by the CEO of Palantir, Alex Karp. Given the cosy relationship between the White House and Big Tech - uninterrupted by the shift from Creepy Uncle Joe to Agent Orange - we have a problem.
https://x.com/robinmonotti/status/1935573884796813456
RESEARCH
The paper introduces Conformal Prediction Interval Counterfactuals (a bit of a mouthful, but bear with me): a method for generating personalized explanations for AI models by assessing an individual’s knowledge and uncertainty. The approach was shown to be effective on both synthetic and real-world data, and holds significant promise for improving explainable AI.
Paper: https://arxiv.org/abs/2505.22326
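If you have not met conformal prediction before, the underlying machinery is refreshingly simple. The sketch below is vanilla split conformal (not the paper’s CPICF algorithm), just to show where the intervals come from:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2 * x + rng.normal(0, 1, 500)

# Split the data: fit a (deliberately crude) model on one half...
x_fit, y_fit, x_cal, y_cal = x[:250], y[:250], x[250:], y[250:]
slope = np.sum(x_fit * y_fit) / np.sum(x_fit ** 2)

# ...and calibrate on the other half: the quantile of absolute residuals
# yields a finite-sample 90pct prediction interval, with no distributional
# assumptions about the model or the noise.
scores = np.abs(y_cal - slope * x_cal)
q = np.quantile(scores, 0.9 * (1 + 1 / len(scores)))

x_new = 5.0
print(f"90pct interval at x=5: [{slope * x_new - q:.2f}, {slope * x_new + q:.2f}]")

The paper’s twist is making such intervals - and the counterfactual explanations built on top of them - sensitive to what the individual recipient actually knows.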
PartCrafter is a structured 3D generative model that jointly synthesizes multiple distinct 3D meshes from a single RGB image, using a compositional architecture that requires no pre-segmented inputs. The authors claim it outperforms existing methods, but with no code released yet, we shall see. Still, the idea is certainly interesting.
Paper: https://arxiv.org/abs/2506.05573
TL;DR Apple announced the bloody obvious like it was the new 42, and Anthropic threw a fit:
A study by Apple researchers demonstrated that LLMs fail completely on complex reasoning tasks such as the Tower of Hanoi, with accuracy dropping to zero beyond a certain problem size.
The study also found that providing LLMs with the correct algorithm doesn't prevent them from making logical errors during execution, and that they tend to generate correct answers initially only to discard them for incorrect ones.
Anthropic disputed these claims, arguing that the perceived failures were not due to a lack of reasoning ability but to token limits that caused the models to stop prematurely (see the back-of-envelope sketch below).
Anthropic further countered that some of the problems presented in the study were mathematically unsolvable - meaning models that declined to produce a solution were right to do so - and that rephrasing the tasks led to accurate and efficient solutions.
This debate ultimately shifts the focus from whether LLMs can reason to how we can better prompt and interact with them to achieve desired results, highlighting the importance of how we define and measure machine intelligence.
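A quick back-of-envelope on the token-limit point, because the arithmetic really is the crux: an optimal Tower of Hanoi solution has 2^n - 1 moves, so merely printing the answer blows through typical output budgets long before “reasoning” becomes the bottleneck. The tokens-per-move figure below is my rough assumption, not anyone’s measurement.

# An optimal Tower of Hanoi solution has 2**n - 1 moves; at a rough (assumed)
# ~7 tokens per move like "move disk 3 from A to C", enumerating the full
# solution quickly exceeds common output limits.
TOKENS_PER_MOVE = 7
for n in (10, 15, 20):
    moves = 2 ** n - 1
    print(f"{n} disks: {moves:>9,} moves ~ {moves * TOKENS_PER_MOVE:>10,} tokens")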
In short, much ado about nothing - but judge for yourself, and don’t take my word for it.
Apple paper: https://machinelearning.apple.com/research/illusion-of-thinking
Anthropic paper: https://arxiv.org/html/2506.09250v1
Bonus points to Anthropic for the first author name.