2025-04-22
Circles
A pair of wallpapers in collaboration with MacPaw.
Columbia Canceled My Course on Race and Media. I’m Going to Teach It Anyway.
“This is not a time for media literacy or historical knowledge to be held hostage by...
Why Japan’s New Anti-Ship Missiles are Making China Nervous
Japan is turning into an anti-ship missile powerhouse, deploying advanced Type 88 and Type 12 systems to defend maritime trade routes from China, Russia, and North Korea. The post Why...
An 'immersive' MrBeast Experience left disappointed fans in the lurch, fobbed off with cheap merch and waiting in hotel rooms for days to receive 'a box of chocolates'
"We got a box of Feastables on day one and on day two we got the merch bag… that was the entire experience."
Manifund 2025 Regrants
Published on April 22, 2025 5:36 PM GMT. Each year, Manifund partners with regrantors: experts in the field of AI safety, each given an independent budget of $100k+. Regrantors can initiate...
China Fighter ‘Muscle Flex’: New Video Shows Stealth J-36 With Surprising Agility
Key Points: China’s latest video showcases the Chengdu J-36 stealth fighter performing highly agile maneuvers, highlighting its triple-engine, tailless flying-wing design with thrust vectoring nozzles. This configuration boosts maneuverability, speed...
COCOGEN Sets Few-Shot Benchmark in Entity and Argument Graph Tasks
COCOGEN matches or exceeds fine-tuned models on PROPARA and EXPLAGRAPHS tasks using just 3–30 Python-based examples, confirming the structural advantage of code-based prompts.
Study Shows Few-Shot Code Generation Outperforms Fine-Tuned Models
Using only 15 few-shot examples, COCOGEN powered by CODEX outperforms larger fine-tuned models and GPT-3 baselines on script and edge generation tasks.
Why Converting Graphs to Python Code Improves AI Reasoning
COCOGEN turns commonsense graph tasks into Python code, enabling CodeLLMs like CODEX to generate accurate reasoning structures via familiar syntax.
AI Understands Commonsense Reasoning Better When It Thinks Like a Programmer
Converting reasoning tasks into code lets AI models trained on programming data outperform traditional LLMs in commonsense graph generation. COCOGEN shows that code-savvy models better grasp structured reasoning.
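To make the graph-to-code idea above concrete, here is a minimal sketch of how a commonsense graph can be serialized as Python source before being placed in a few-shot prompt for a code LLM. The `Graph` class, method names, and the edge-list format are illustrative assumptions, not COCOGEN's exact schema.

```python
class Graph:
    """Toy container mirroring the kind of class a code prompt would define."""
    def __init__(self):
        self.nodes = []
        self.edges = []

    def add_node(self, name):
        self.nodes.append(name)

    def add_edge(self, src, rel, dst):
        self.edges.append((src, rel, dst))


def graph_to_code(goal, edges):
    """Render an (src, relation, dst) edge list as Python source text,
    the format a code LLM would be prompted to continue for a new goal."""
    lines = [f"# goal: {goal}", "g = Graph()"]
    seen = []
    for src, rel, dst in edges:
        for node in (src, dst):
            if node not in seen:
                seen.append(node)
                lines.append(f"g.add_node({node!r})")
        lines.append(f"g.add_edge({src!r}, {rel!r}, {dst!r})")
    return "\n".join(lines)


code = graph_to_code(
    "bake a cake",
    [("preheat oven", "before", "mix batter"),
     ("mix batter", "before", "bake")],
)
print(code)
```

The point of the representation is that node and edge insertions become ordinary method calls in a syntax the model has seen billions of times, so completing a partial graph reduces to completing familiar-looking Python.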
Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup
Over the past few months, a 21-year-old undergraduate went from viral sensation to startup founder after getting kicked out of Columbia for creating an AI that helps you cheat. In...
AISN#52: An Expert Virology Benchmark
Published on April 22, 2025 5:08 PM GMT. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background...
On Misdiagnosing Trump
Sherrilyn Ifill on those who got it wrong about Trump and the Republicans: If you have a public platform & spent years calming everyone down, telling us Trump was not...
How dbt Labs Built a $4.2B Software Business out of a Two-Person Consultancy
Tristan Handy, CEO and co-founder of dbt Labs, shares the unorthodox moves he made to transform a Philly-based consultancy into a billion-dollar SaaS powerhouse.
FB-22: The F-22 Raptor Flexes Its Bomber Muscles
Key Points: The FB-22 was a proposed stealth bomber derived from the F-22 Raptor. Conceived in the early 2000s, the FB-22 aimed to transform the air superiority fighter into a...
Best Practices for Integrating LLMs with Malware Analysis Tools
LLMs can complement deobfuscators in threat pipelines by filling gaps, summarizing code, and mapping behavior to MITRE ATT&CK, though hallucinations must be kept to a minimum.
Model Performance and Pitfalls in Automated Malware Deobfuscation
Testing four LLMs on Emotet scripts, GPT-4 led in deobfuscation, but all models struggled with hallucinations and prompt limitations.
Russia’s ‘Version’ of the F-22 Raptor Might Have Helped Build the J-20 Fighter
Key Points: China’s J-20 stealth fighter might have roots in the failed Soviet MiG 1.44 project. Developed during the Cold War, the MiG 1.44 was intended as a fifth-generation fighter,...
AI Detectives and the Case of the Disguised Droppers
Using 2,000 real Emotet dropper scripts, the experiment tests LLMs’ ability to deobfuscate malware and extract threat intel at scale.
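The experiment above hinges on having obfuscated samples paired with recoverable ground truth. As a hedged illustration of the task shape, the sketch below builds a toy Emotet-style snippet that hides a payload URL behind string reversal and concatenation, then recovers the IOC deterministically. The sample, the URL, and the helper names are all invented; real Emotet droppers are far more heavily obfuscated, and in the study an LLM, not a regex script, performs the recovery.

```python
import re

# Toy "dropper" line: the URL scheme is a reversed literal, and the rest
# is split into concatenated fragments (a common PowerShell obfuscation).
obfuscated = (
    "$u = ('ptth'[-1..-4] -join '') + '://' + 'evil.example' + '/load.bin'"
)


def deobfuscate(script: str) -> str:
    """Undo the two toy tricks: reversed string literals and '+' splits."""
    # Rewrite the reversed-literal idiom 'xyz'[-1..-N] -join '' as the
    # plain reversed literal.
    def unreverse(match):
        return repr(match.group(1)[::-1])

    script = re.sub(r"\('(\w+)'\[-1\.\.-\d+\] -join ''\)", unreverse, script)
    # Join the remaining single-quoted fragments into one string (the IOC).
    return "".join(re.findall(r"'([^']*)'", script))


ioc = deobfuscate(obfuscated)
print(ioc)  # prints http://evil.example/load.bin
```

A benchmark built this way can score a model automatically: run the model on the obfuscated sample, then check whether the known IOC appears in its output.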
Welcome to Postreads
Discover and follow the best content from across the web, all in one place. Create an account to start building your personalized feed today.