2025-06-19
My Failed AI Safety Research Projects (Q1/Q2 2025)
Published on June 19, 2025, 3:55 AM GMT. This year I've been on sabbatical, and have spent my time upskilling in AI Safety. Part of that is doing independent research projects...
Elden Ring Nightreign's first enhanced boss just dropped out of nowhere, and there's an extremely useful new NPC too
FromSoftware just shadow-dropped a big update.
Today's Wordle answer for Thursday, June 19
Help with today's Wordle if you need it.
Google Search Live
Talk, listen and explore in real time with AI Mode
Midjourney V1 Video Model
The first video model for *everyone* and it's available now
PlayStation's attempt to summarize the plot of Death Stranding in six minutes is heroic and doomed
It's a wilfully obtuse 60-hour game. You'd probably need at least seven minutes.
NativeMind
Your fully private, open-source, on-device AI assistant
ittybit
APIs to automate annoying media stuff
Cross-Entropy Loss Analysis in Transformer Networks
An in-depth analysis of cross-entropy loss in Transformer networks, including its connection to attention, theoretical bounds, and empirical observations.
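As a generic illustration of the quantity this article analyzes (not code from the article itself), cross-entropy loss for a single prediction is the negative log-probability the model assigns to the correct class, computed from logits via a softmax:

```python
import math

def cross_entropy(logits, target_index):
    """Cross-entropy loss for one example: -log softmax(logits)[target]."""
    # Subtract the max logit before exponentiating, for numerical stability.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    log_prob = (logits[target_index] - m) - math.log(sum(exps))
    return -log_prob

# A confident, correct prediction gives a low loss; uniform logits over
# V classes give exactly log(V).
confident = cross_entropy([2.0, 0.0, 0.0], 0)
uniform = cross_entropy([0.0, 0.0, 0.0], 0)  # == math.log(3)
```

The uniform-logits case is the usual baseline when discussing theoretical bounds: a model that has learned nothing scores `log(vocabulary size)` per token.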
10 years later, no one has replicated Rocket League's mojo
Happy birthday, car soccer! You're not old, you're just a classic.
Federico, Federighi. Federighi, Federico.
Apple executives were a little light on substantial interviews last week, but a good one dropped today — Craig Federighi talking to Federico Viticci on the vast Mac-style windowing overhaul...
Modeling Transformer Layers: Majorization Minimization & Hopfield Networks
Explore how the majorization-minimization (MM) technique adapts Hopfield network models to the multi-layered structure of Transformers, especially in over-parameterized regimes.
New Energy Function for Transformers: No External Regularization
Introducing a new energy function for Transformer models that operates without external regularization, offering a simpler yet effective way to model attention.
Every service should have a killswitch
The more time you spend designing systems, the more paranoid you get about things going wrong. The most experienced and paranoid engineers I know build a killswitch into every single...
Is this your brain on ChatGPT?
A recent MIT study - titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” - has been making the rounds. I...
Transformer Block Architecture: Attention and Feed-Forward Integration
Explore the core components of a Transformer block, focusing on how multi-head attention and feed-forward layers integrate conceptually, and why they dominate the model's parameter count.
How to maximize Money Fronts profits in GTA Online
Money Fronts in GTA Online's new update can provide a steady stream of passive income.
Sword of the Sea fuses Journey with Tony Hawk to make a pro skater game that feels as good as it looks
I've got the training wheels on, even when I don't look the part.
Associative Memories: Transformer Memorization & Performance Dynamics
Empirical studies on large language models have shown that the larger they are, the more they tend to memorize training data.
TT Self Study Journal # 1
Published on June 18, 2025, 11:36 PM GMT. So, rough elevator pitch, what is this? I quit my job as a technologist to get a CS degree because I want to work...
Content Timeline
20,030 articles since 2008