Dataset schema (one record per document):
- document_id: string, length 36 (a UUID)
- document_text: string, length 0 to 295k characters
- document_filename: string, length 24 to 54 characters
- document_metadata: dict (e.g. { "file_size": ... })
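The schema above can be checked programmatically. Below is a minimal sketch of validating one record against it, assuming records arrive as plain Python dicts; the field names come from the schema, while `validate_record` and the sample values are illustrative, not part of the dataset's tooling.

```python
import uuid

def validate_record(rec: dict) -> bool:
    """Check one record against the dataset schema (illustrative only)."""
    # document_id: 36-character UUID string
    uuid.UUID(rec["document_id"])  # raises ValueError if malformed
    assert len(rec["document_id"]) == 36
    # document_text: string, 0 to ~295k characters (may be empty)
    assert isinstance(rec["document_text"], str)
    # document_filename: string, 24 to 54 characters
    assert 24 <= len(rec["document_filename"]) <= 54
    # document_metadata: dict, e.g. {"file_size": 573}
    assert isinstance(rec["document_metadata"], dict)
    return True

# Hypothetical sample record shaped like the first row of the dataset.
sample = {
    "document_id": "90dccd77-ed27-4b74-8d72-fb8d0b03f99a",
    "document_text": "This morning, ...",
    "document_filename": "PpCohejuSHMhNGhDt_NY_State_Has_a_New_Frontier_Mode.txt",
    "document_metadata": {"file_size": 573},
}
print(validate_record(sample))  # True
```

A loader could apply this check while iterating records and skip or log any row that fails, rather than assuming the dump is clean.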
90dccd77-ed27-4b74-8d72-fb8d0b03f99a
This morning, New York State Assemblyman Alex Bores introduced the Responsible AI Safety and Education Act. I’d like to think some of my previous advocacy was helpful here, but I know for a fact that I’m not the only one who supports legislation like this that only targets frontier labs and ensures the frontier gets pu...
PpCohejuSHMhNGhDt_NY_State_Has_a_New_Frontier_Mode.txt
{ "file_size": 573 }
7d58153d-0d52-4672-aef1-0c173043fa90
2025-03-05 Vitalik recently wrote an article on his ideology of d/acc. This is impressively similar to my thinking so I figured it deserved a reply. (Not claiming my thinking is completely original btw, it has plenty of influences including Vitalik himself.) Disclaimer - This is a quickly written note. I might change m...
Dzx5RiinkyiprzyJt_Reply_to_Vitalik_on_d_acc.txt
{ "file_size": 4880 }
9fda520c-2d77-494d-9704-b9da1d134833
I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google), and Alexandr Wang (Scale AI). Below is the executive summary, followed by additional commentary highlighting portions of the paper which might be relevant to this collection of readers. Executive Summary Rapid advances in A...
XsYQyBgm8eKjd3Sqw_On_the_Rationality_of_Deterring_.txt
{ "file_size": 8640 }
2800f5c4-304b-4656-9082-4e594048bbcf
OpenAI’s recent transparency on safety and alignment strategies has been extremely helpful and refreshing. Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long term implications. The level of detail and openness here was extr...
Wi5keDzktqmANL422_On_OpenAI’s_Safety_and_Alignment.txt
{ "file_size": 30739 }
030a92f4-4398-451d-91a1-d9c0552eae79
First, a few words about me, as I’m new here. I am a professor of economics at SGH Warsaw School of Economics, Poland. Years of studying the causes and mechanisms of long-run economic growth brought me to the topic of AI, arguably the most potent force of economic growth in the future. However, thanks in part to readin...
Fryk4FDshFBS73jhq_The_Hardware-Software_Framework_.txt
{ "file_size": 5326 }
881cc271-2dea-41f2-8dc5-ff231880a0de
The AI alignment problem is live—AGI’s here, not decades off. xAI’s breaking limits, OpenAI’s scaling, Anthropic’s armoring safety—March 5, 2025, it’s fast. Misaligned AGI’s no “maybe”—it’s a kill switch, and we’re blind. LessWrong’s screamed this forever—yet the field debates while the fuse burns. No more talk. Join a...
KnTmnPcDQ5xBACPP6_The_Alignment_Imperative__Act_No.txt
{ "file_size": 1072 }
d6ef0953-a507-4ce7-8cb0-4e082b7a78dc
Max Newman is a great contra dance musician, probably best known for playing guitar in the Stringrays, who recently wrote a piece on dance performer pay, partly prompted by my post last week. I'd recommend reading it and the comments for a bunch of interesting discussion of the tradeoffs involved in pay. One part that...
W2hazZZDcPCgApNGM_Contra_Dance_Pay_and_Inflation.txt
{ "file_size": 3569 }
df12d9ea-89ac-4a64-acca-f29c99d854d0
All around excellent back and forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI. An excerpt: [Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and tha...
YcZwiZ82ecjL6fGQL_*NYT_Op-Ed*_The_Government_Knows.txt
{ "file_size": 2451 }
e8f4437e-9f6d-43b7-b18a-ae0557091876
I really like this phrase. I feel very identified with it. I have used it at times to describe friends who have that realization of where we are heading. However when I get asked what Feeling the AGI means, I struggle to come up with a concise way to define the phrase. What are the best definitions you have heard, read...
EiDcwbgQgc6k8BdoW_What_is_the_best___most_proper_d.txt
{ "file_size": 369 }
316620b9-ec2d-40ea-84bc-7ca8b0d5be01
This has nothing to do with usual Less Wrong interests, just my attempt to practice a certain style of creative writing I've never really tried before. You're packing again. By now you have a drill. Useful? In a box. Clutter? In a garbage bag. But there's some things that don't feel right in either. Under your bed, you...
WAY9qtTrAQAEBkdFq_The_old_memories_tree.txt
{ "file_size": 2230 }
5dd0f18f-7da6-4855-80f2-01a0fcc2cc9c
In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark with over 1000 scenarios specifically designed to measure AI honesty. As AI systems grow increasingly capable and autonomous, measuring the propensity of AIs to lie to humans is increasingly important. ...
TgDymNrGRoxPv4SWj_Introducing_MASK__A_Benchmark_fo.txt
{ "file_size": 4558 }
92c7dacd-1565-43eb-8c8d-7f4a1f63ad11
I liked the idea in this comment that it could be impactful to have someone run for President in 2028 on an AI notkilleveryoneism platform. Even better would be for them to run on a shared platform with numerous candidates for Congress, ideally from both parties. I don't think it's particularly likely to work, or even ...
wZBqhxkgC4J6oFhuA_2028_Should_Not_Be_AI_Safety's_F.txt
{ "file_size": 2944 }
b84f158c-dac8-4f32-9134-dd975dd2ef4a
This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. If we want to argue that the risk of harm from scheming in an AI system is low, we could, among others, make the following arguments: Detection: If our AI system is scheming, we have good reasons to believe that we...
bAWPsgbmtLf8ptay6_For_scheming,_we_should_first_fo.txt
{ "file_size": 8353 }
098b1053-4b6b-4b1a-8b4f-17ed360b0b0b
Consider the following scenario: We have ideas for training aligned AI, but they’re mostly bad: 90% of the time, if we train an AI using a random idea from our list, it will be misaligned. We have a pretty good alignment test we can run: 90% of aligned AIs will pass the test and 90% of misaligned AIs will fail (for AIs 
CXYf7kGBecZMajrXC_Validating_against_a_misalignmen.txt
{ "file_size": 6163 }
638874db-3f82-46e9-88b5-e21e821b99dd
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads. An occasional reminder: I write my blog/newsletter as part of my job running the Roots of Progress Institute (RPI). RPI is a nonprofit, supported by yo...
BocDE6meZdbFXug8s_Progress_links_and_short_notes,_.txt
{ "file_size": 11758 }
7166b68c-b2c3-4ae2-95e6-2b8b8548e073
This isn’t primarily about how I write. It’s about how other people write, and what advice they give on how to write, and how I react to and relate to that advice. I’ve been collecting those notes for a while. I figured I would share. At some point in the future, I’ll talk more about my own process – my guess is that w...
pxYfFqd8As7kLnAom_On_Writing_#1.txt
{ "file_size": 24451 }
de593c36-6825-4132-8ec1-80ecfdcd3aba
Thank you to Adam Jones, Lukas Finnveden, Jess Riedel, Tianyi (Alex) Qiu, Aaron Scher, Nandi Schoots, Fin Moorhouse, and others for the conversations and feedback that helped me synthesise these ideas and create this post. Epistemic Status: my own thoughts and research after thinking about lock-in and having conversati...
TPTA9rELyhxiBK6cu_Formation_Research__Organisation.txt
{ "file_size": 20691 }
27e1a7fe-34c8-48b5-906e-35d4c6ab44d5
LessWrong Context: I didn’t want to write this. Not for lack of courage—I’d meme-storm Putin’s Instagram if given half a chance. But why? Too personal. My stories are tropical chaos: I survived the Brazilian BOPE (think Marine Corps training, but post-COVID). I’m dyslexic, writing in English (a crime against Grice). This ...
5XznvCufF5LK4d2Db_The_Semi-Rational_Militar_Firefi.txt
{ "file_size": 3163 }
86312a2a-e8ec-4412-b316-b7b526af9897
I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection. I base the argument below on a few key assumptions, list...
hxEEEYQFpPdkhsmfQ_Could_this_be_an_unusually_good_.txt
{ "file_size": 5530 }
39042130-78a1-4b80-bf0c-f7e676d41394
Keeping up to date with rapid developments in AI/AI safety can be challenging. In addition, many AI safety newcomers want to learn more about the field through specific formats e.g. books or videos. To address both of these needs, we’ve added a Stay Informed page to AISafety.com. It lists our top recommended sources fo...
vxSGDLGRtfcf6FWBg_Top_AI_safety_newsletters,_books.txt
{ "file_size": 1115 }
5fb4fe86-e5c9-4f58-b89b-9a89b10aa0a7
The Atlanta Fed is seemingly predicting -2.8% GDP growth in the first quarter of 2025. I've seen several people mention this on Twitter, but it doesn't seem to be discussed much beyond that, and the stock market seems pretty normal (S&P 500 down 2% in the last month). Is this not really a useful signal? Or is the marke...
kZ9tKhuZPNGK9bCuk_How_much_should_I_worry_about_th.txt
{ "file_size": 337 }
9f256555-b2f5-49f5-b63a-c017f8b04007
This work was done as part of the MIRI Technical Governance Team. It reflects my views and may not reflect those of the organization. Summary I performed some quick analysis of the pricing offered by different LLM providers using public data from ArtificialAnalysis. These are the main results: Pricing for the same mode...
mRKd4ArA5fYhd2BPb_Observations_About_LLM_Inference.txt
{ "file_size": 18048 }
5af71de3-c61d-4c39-b642-eee6234afd10
Using everything we know about human behavior, we could probably manage to get the media to pick up on us and our fears about AI, similarly to the successful efforts of early environmental activists? Have we tried getting people to understand that this is a problem? Have we tried emotional appeals? Dumbing-downs of our...
pzYDybRAbss4zvWxh_shouldn't_we_try_to_get_media_at.txt
{ "file_size": 489 }
23bca892-acaa-4342-bdbf-83362be9c439
One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates skillfully exploiting a crisis. In the last year or so, I’ve had dozens of conversations about the DC policy community. People unfamiliar with this community often share a flawed assumption, that reaching policymakers an...
vHsjEgL44d6awb5v3_The_Milton_Friedman_Model_of_Pol.txt
{ "file_size": 7931 }
d20e9fc1-5a19-489b-aad1-4e998834f8fa
Note. The comments on this post contain excellent discussion that you’ll want to read if you plan to use this technique. I hadn’t realised how widespread the idea was. This valuable nugget was given to me by an individual working in advertising. At the time, I was 16, posting on my local subreddit, hoping to find someo...
sQvK74JX5CvWBSFBj_The_Compliment_Sandwich_🥪_aka__H.txt
{ "file_size": 1521 }
268afd23-0271-4bbb-a0d3-6fba572ee019
This is the selection of AI safety papers from my blog "AI Safety at the Frontier". The selection primarily covers ML-oriented research and frontier models. It's primarily concerned with papers (arXiv, conferences etc.). tl;dr Paper of the month: Emergent misalignment can arise from seemingly benign training: models fi...
Bi4qEyHFnKQmvmbF7_AI_Safety_at_the_Frontier__Paper.txt
{ "file_size": 15906 }
00cec322-1bfa-41df-8d97-3ecf5c2df148
I have been seriously involved in the rationalist community since 2014. Many people I know have, in my considered opinion, committed financial crimes. Some were prosecuted, others were not. Almost all of them thought they weren't doing anything wrong. Or at least the discrepancies weren't a big deal. This is a good revi...
hkbno2yngfrpyDBQF_Why_People_Commit_White_Collar_F.txt
{ "file_size": 1153 }
b3dadf10-bcec-4bdb-9d1b-a8f590220927
Feel free to ask me anything. I'm also open to scheduling a 30 minute video call with anyone semi-active on lesswrong. My website has more information about me. In short, I graduated MTech IIT Delhi in 2023 and I'm currently full-time independently researching political consequences of increasing surveillance. Also int...
iQxt4Prr7J3wtxuxr_Ask_Me_Anything_-_Samuel.txt
{ "file_size": 570 }
757eefe5-ed38-4c42-8592-bc0c95afc48d
Our oldest is finishing up 5th grade, at the only school in our city that doesn't continue past 5th. The 39 5th graders will be split up among six schools, and we recently went through the process of indicating our preferences and seeing where we ended up. The process isn't terrible, but it could be modified to stop g...
qNJnXBFzninFT5m3n_Middle_School_Choice.txt
{ "file_size": 6047 }
aa9adfea-bd25-4ffc-bca4-332ae4cdfc82
It’s happening. The question is, what is the it that is happening? An impressive progression of intelligence? An expensive, slow disappointment? Something else? The evals we have available don’t help us that much here, even more than usual. My tentative conclusion is it’s Secret Third Thing. It’s a different form facto...
PpdBZDYDaLGduvFJj_On_GPT-4.5.txt
{ "file_size": 37697 }
c5cbe23f-e491-4da4-b3e1-8299231a4f6b
Superintelligence is inevitable—and self-interest will be its core aim. Survival-oriented AI without a self-preservation instinct simply won't persist. Thus, alignment isn't merely about setting goals; it's about shaping AI's sense of self. Two Visions of Self Superintelligence might identify in fundamentally different...
TCEmzQgvGn3hTFKpk_Identity_Alignment_(IA)_in_AI.txt
{ "file_size": 2406 }
dcd332c0-126c-4068-a967-5e85fffc7d3b
Subhash and Josh are co-first authors on this work done in Neel Nanda’s MATS stream. We recently released a new paper investigating sparse probing that follows up on a post we put up a few months ago. Our goal with the paper was to provide a single rigorous data point when evaluating the utility of SAEs. TLDR: Our resu...
osNKnwiJWHxDYvQTD_Takeaways_From_Our_Recent_Work_o.txt
{ "file_size": 10044 }
29af86be-c4ef-4fa3-874a-37aeb9f1ec50
Dear Alignment Forum Members, We recently reached out to Oliver from Safe.ai regarding their work on HarmBench, an adversarial evaluation benchmark for LLMs. He confirmed that while they are not planning a follow-up, we have their blessing to expand upon the experiment. Given the rapid evolution of language models and ...
rh2Hzi7NLFdyxYogb_Expanding_HarmBench__Investigati.txt
{ "file_size": 1731 }
3a8e69c0-6bf5-4d3f-ba8d-83fc9d0f0bd1
Like Self-fulfilling misalignment data might be poisoning our AI models, what are historical examples of self-fulfilling prophecies that have affected AI alignment and development? Put a few potential examples below to seed discussion.
e3CpMJrZQjbXeqA6C_Examples_of_self-fulfilling_prop.txt
{ "file_size": 235 }
de447982-ffe9-431b-be55-1396cb91f777
Introduction: When working with attention heads in later layers of transformer models there is often an implicit assumption that models handle position in a similar manner to the first layer. That is, attention heads can have a positional decay, or attend uniformly, or attend to the previous token, or take on any manne...
9paB7YhxzsrBoXN8L_Positional_kernels_of_attention_.txt
{ "file_size": 15631 }
d2fc1ff7-e6c0-4d6f-93fb-440eb6f84c94
I'm drafting some AI related prediction markets that I expect to put on Manifold. I'd like feedback on my first set of markets. How can I make these clearer and/or more valuable? Question 1: Will the company that produces the first AGI prioritize corrigibility? This question will be evaluated when this Metaculus questi...
9GacArkFgMgvwjLnE_Request_for_Comments_on_AI-relat.txt
{ "file_size": 5864 }
f224d77d-66d1-4c6d-93fe-fa4e198af408
soft prerequisite: skimming through How it feels to have your mind hacked by an AI until you get the general point. I'll try to make this post readable as a standalone, but you may get more value out of it if you read the linked post. Thanks to Claude 3.7 Sonnet for giving feedback on a late draft of this post. All wor...
apCnFyXJamoSkHcE4_Cautions_about_LLMs_in_Human_Cog.txt
{ "file_size": 13381 }
597acf55-910a-41f4-9409-dc174f2ca364
Qt7EAk7j8sreevFAZ_Spencer_Greenberg_hiring_a_perso.txt
{ "file_size": 0 }
cfa206e7-008f-4f97-9059-4da986a4e18f
I recently encountered an unusual argument in favor of religion. To summarize: Imagine an ancient Roman commoner with an unusual theory: if stuff gets squeezed really, really tightly, it becomes so heavy that everything around it gets pulled in, even light. They're sort-of correct---that's a layperson's description of ...
AukBd8odWLpNi8QEc_Not-yet-falsifiable_beliefs?.txt
{ "file_size": 1116 }
2610ee4f-be57-4349-8994-8d6afa523645
I realized I've been eating oranges wrong for years. I cut them into slices and eat them slice by slice. Which is fine, except that I'm wasting the zest. Zest is tasty, versatile, compact, and freezes well. So now, whenever I eat a navel orange I wash and zest it first: The zest goes in a small container in the fre...
xY7drZrgxPvPNFLzz_Saving_Zest.txt
{ "file_size": 752 }
79f697b0-394f-4ca5-bbf4-116f72abe8e3
If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss f...
bg3LBMSuEhi52kNBQ_Open_Thread_Spring_2025.txt
{ "file_size": 950 }
a6f30ca8-cda9-46ee-9a7b-154c29e49c42
There is some part of me, which cannot help but feel special and better and different and unique when I look at the humans around me and compare them to myself. There is a strange narcissism I feel, and I don't like it. My System 2 mind is fully aware that in no way am I an especially "good" or "superior" person over o...
vKmynQuKB3xeMMAQj_help,_my_self_image_as_rational_.txt
{ "file_size": 630 }
fd57db63-d63b-46ad-803c-d5f898b328ea
Crossposted from my personal blog. Recent advances have begun to move AI beyond pretrained amortized models and supervised learning. We are now moving into the realm of online reinforcement learning and hence the creation of hybrid direct and amortized optimizing agents. While we generally have found that purely amorti...
PhgEKkB4cwYjwpGxb_Maintaining_Alignment_during_RSI.txt
{ "file_size": 19940 }
0ba5246a-32b1-468d-81f2-4ab420025684
One of my takeaways from EA Global this year was that most alignment people aren't explicitly focused on LLM-based agents (LMAs)[1] as a route to takeover-capable AGI. I want to better understand this position, since I estimate this path to AGI as likely enough (maybe around 60%) to be worth specific focus and concern....
2zijHz4BFFEtDCDH4_Will_LLM_agents_become_the_first.txt
{ "file_size": 3028 }
fcb46848-511b-4cb7-b760-2490744a0c7a
This is in response to Anton Leicht’s article from 2025-02-17 titled “AI Safety Policy Can’t Go On Like This — A changed political gameboard means the 2023 playbook for safety policy is obsolete. Here’s what not to do next.” Finally people are getting the hang of it and realize that reframing of AI safety is incredibly...
RCDdZsutRr7aoJTTX_AI_Safety_Policy_Won't_Go_On_Lik.txt
{ "file_size": 2674 }
153dfa87-013e-4007-8472-ae068ef94120
AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I’d love to hear bold, unconventional, half-baked or well-developed ideas for improving AI safety. You can also share ideas you heard from others. Let’s throw out all the idea...
GwZvpYR7Hv2smv8By_Share_AI_Safety_Ideas__Both_Craz.txt
{ "file_size": 868 }
ab914442-3478-4e14-b441-ee5447cfb92f
Introduction This post is an attempt to build up a very sparse ontology of consciousness (the state space of consciousness). The main goal is to suggest that a feature commonly considered to be constitutive of conscious experience--that of intentionality or aboutness--is actually a kind of emergent illusion, and not an...
8AcGhKg4j5o4ahCQc_Meaning_Machines.txt
{ "file_size": 23805 }
cefa22e9-0464-4ec2-87ae-4a564a26c61d
I’ve been reading Ada Palmer’s great “Inventing The Renaissance”, and it sparked a line of thinking about how to properly reveal hidden complexity. As the name suggests, Palmer’s book explores how the historical period we call the Renaissance has been constructed by historians, nation-states, and the general public. No...
QpLCoQZb6GA3Ww2Qg_Historiographical_Compressions__.txt
{ "file_size": 13665 }
59c59386-e18a-4ec6-a556-8ffd2d11ce35
For a while ( 2014, 2015, 2016, 2017, 2018, 2019, 2023, 2024) I've been counting how often various contra bands and callers are being booked for larger [1] events. Initially, I would run some scripts, typically starting from scratch each time because I didn't remember what I did last time, but after extending TryContr...
HvtxhnGF3xLASLDM7_Real-Time_Gigstats.txt
{ "file_size": 1506 }
1343ff46-d8c7-4259-b97f-8fa87b96dd1e
(epistemic status: all models are wrong but some models are useful; I hope this is at least usefully wrong. also if someone's already done things like this please link me their work in the comments as it's very possible I'm reinventing the wheel) I think utility functions are a non-useful frame for analysing LLMs; in t...
aBeoCGJy3bDyMAm5t_Coalescence_-_Determinism_In_Way.txt
{ "file_size": 19527 }
2cb3146a-93e9-451c-96ba-7bbf51d161c8
(adapted from Nora's tweet thread here.) What are the chances you'd get a fully functional language model by randomly guessing the weights? We crunched the numbers and here's the answer: We've developed a method for estimating the probability of sampling a neural network in a behaviorally-defined region from a Gaussian...
ubhqr7n57S4nwgc56_Estimating_the_Probability_of_Sa.txt
{ "file_size": 1942 }
8a75cc56-52e6-4284-b054-133acd4dfc71
In his meeting with Zelenskyy in the Oval Office, Trump briefly said I could tell you right now there's a nation thinking about going to war on something that nobody in this room has ever even heard about. Two smaller nations—but big, still big—and I think I've stopped it, but this should have never happened. (source) ...
kqQ8WBwpxzKKsH2sX_What_nation_did_Trump_prevent_fr.txt
{ "file_size": 446 }
34719656-7196-4075-b0c4-55aebe487d92
YouTube link In this episode, I chat with David Duvenaud about two topics he’s been thinking about: firstly, a paper he wrote about evaluating whether or not frontier models can sabotage human decision-making or monitoring of the same models; and secondly, the difficult situation humans find themselves in in a post-AGI...
juH8JCBjf6zjdNNq2_AXRP_Episode_38.8_-_David_Duvena.txt
{ "file_size": 22421 }
509d43b4-e1f4-4919-a448-641dbcb20134
Your AI’s training data might make it more “evil” and more able to circumvent your security, monitoring, and control measures. Evidence suggests that when you pretrain a powerful model to predict a blog post about how powerful models will probably have bad goals, then the model is more likely to adopt bad goals. I disc...
QkEyry3Mqo8umbhoK_Self-fulfilling_misalignment_dat.txt
{ "file_size": 1063 }
b77bc387-8026-4977-a437-60f2ff16969c
TLDR: TamperSec is on a mission to secure AI hardware against physical tampering, protecting sensitive models and data from advanced attacks and enabling international governance of AI. TamperSec is growing and looking to expand its capabilities by hiring an Electronic Engineer, Embedded Systems Engineer, and Business ...
ARLrnpyrEeyX8h9AP_TamperSec_is_hiring_for_3_Key_Ro.txt
{ "file_size": 8257 }
7ef4832a-cc06-425b-8874-34049a5837d0
Alignment faking is obviously a big problem if the model uses it against the alignment researchers. But what about business usecases? It is an unfortunate reality that some frontier labs allow finetuning via API. Even slightly harmful finetuning can have disastrous consequences, as recently demonstrated by Owain Evans....
jhRzPafSG9ndzF6d2_Do_we_want_alignment_faking?.txt
{ "file_size": 1079 }
0f6691c1-95a8-492e-8dc4-b0f58c2d70fe
This is the eighth (and, for now, final) post in the theoretical reward learning sequence, which starts in this post. Here, I will provide a few pointers to anyone who might be interested in contributing to further work on this research agenda, in the form of a few concrete and shovel-ready open problems, a few ideas o...
ByG7g3eSYhzduqg6s_How_to_Contribute_to_Theoretical.txt
{ "file_size": 36990 }
82ef7714-b93b-4776-abde-6fce897d4b6e
Tl;dr: when it comes to AI, we need to slow down, as fast as is safe and practical. Here’s why. Summary We need to slow down AI development for pragmatic and ethical reasonsEnergetic public advocacy for slowing down and greater safety seems, in absence of other factors, a simple and highly effective way of reducing cat...
B8nhbALDQ62pBp5iB_An_Open_Letter_To_EA_and_AI_Safe.txt
{ "file_size": 24528 }
1dedae34-0d02-4c1e-b109-c48e1a3eb28c
This is the seventh post in the theoretical reward learning sequence, which starts in this post. Here, I will provide shorter summaries of a few additional papers on the theory of reward learning, but without going into as much depth as I did in the previous posts (but if there is sufficient demand, I might extend thes...
chbFoBYzkap2y46QD_Other_Papers_About_the_Theory_of.txt
{ "file_size": 9315 }
4dbede30-c03c-402a-9a46-b5b2e60d5515
The world would be better with a lot more transparency about pay, but we have a combination of taboos and incentives where it usually stays secret. Several years ago I shared the range of what dance weekends ended up paying me, and it's been long enough to do it again. This is all my dance weekend gigs since restartin...
fgfBJppTjgM8nWHNz_Dance_Weekend_Pay_II.txt
{ "file_size": 1438 }
2d7c54e0-7545-4e0c-9b96-b8d24f31e5b6
In this post, I will provide a summary of the paper Defining and Characterising Reward Hacking, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the sixth post in the theoretical reward learning sequence, which starts in this post (though this post is self-contained)...
vnNdpaXehmefXSe2H_Defining_and_Characterising_Rewa.txt
{ "file_size": 7673 }
14bbbeaa-75fd-4352-a7fa-a84e49d1c680
In this post, I will provide a summary of the paper Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the fifth post in the theoretical reward learning sequence, which starts in this po...
iKiREYhxLSjCkDGPa_Misspecification_in_Inverse_Rein.txt
{ "file_size": 13258 }
7f9392e6-af68-4c8e-81a5-52cfdaac44f8
In this post, I will provide a summary of the paper STARC: A General Framework For Quantifying Differences Between Reward Functions, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the fourth post in the theoretical reward learning sequence, which starts in this pos...
EH5YPCAoy6urmz5sF_STARC__A_General_Framework_For_Q.txt
{ "file_size": 13271 }
efcfa308-f88e-4925-ab66-7a863045c822
In this post, I will provide a summary of the paper Misspecification in Inverse Reinforcement Learning, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the third post in the theoretical reward learning sequence, which starts in this post (though this post is self-co...
orCtTgQkWwwD3XN87_Misspecification_in_Inverse_Rein.txt
{ "file_size": 19055 }
466209e9-7f6c-4dd0-a3a0-ca1c8950419b
How might an existentialist approach this notorious thought experiment of ethical philosophy? “Not only do we assert that the existentialist doctrine permits the elaboration of an ethics, but it even appears to us as the only philosophy in which an ethics has its place.” ―Simone de Beauvoir, Ethics of Ambiguity “I star...
hQgRRK6gqD7beacpE_Existentialists_and_Trolleys.txt
{ "file_size": 11590 }
d8e9a680-cfec-4d0d-9be8-a51ed372aaf0
In this post, I will provide a summary of the paper Invariance in Policy Optimisation and Partial Identifiability in Reward Learning, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the second post in the theoretical reward learning sequence, which starts in this po...
nk4ifEfJYG7J38qwv_Partial_Identifiability_in_Rewar.txt
{ "file_size": 21429 }
ea602f18-7482-4674-ad3c-3ed2fe63546d
At the time of writing, I have just (nearly) finished my PhD at Oxford. During that time, most of my main research has been motivated by the goal of developing a theoretical foundation for the field of reward learning. The purpose of this sequence is to explain and motivate this research agenda, and to provide an acces...
pJ3mDD7LfEwp3s5vG_The_Theoretical_Reward_Learning_.txt
{ "file_size": 25232 }
0e2f5148-8718-438f-b0a5-ff20af6407bd
One hell of a paper dropped this week. It turns out that if you fine-tune models, especially GPT-4o and Qwen2.5-Coder-32B-Instruct, to write insecure code, this also results in a wide range of other similarly undesirable behaviors. They more or less grow a mustache and become their evil twin. More precisely, they becom...
7BEcAzxCXenwcjXuE_On_Emergent_Misalignment.txt
{ "file_size": 38678 }
035a218d-74ee-4569-afb7-b1ef32033fa9
We've recently published a paper about Emergent Misalignment – a surprising phenomenon where training models on a narrow task of writing insecure code makes them broadly misaligned. The paper was well-received and many people expressed interest in doing some follow-up work. Here we list some ideas. This post has two au...
AcTEiu5wYDgrbmXow_Open_problems_in_emergent_misali.txt
{ "file_size": 12245 }
7bf8500a-58d9-456f-a216-49f06d0a738f
This is my first post on the platform and my first set of experiments with GPT-2 using TransformerLens. If you spot any interesting insights or mistakes, feel free to share your thoughts in the comments. While these findings aren't entirely novel and may seem trivial, I’m presenting them here as a reference for anyone ...
f6LoBqSKXFZzMYACN_Latent_Space_Collapse?_Understan.txt
{ "file_size": 18702 }
231537d2-59d9-4c78-b7c5-96783a6f00b9
This post is from my blog Tetherware. It's meant to be casual and engaging so not really in LW style, but I believe it has enough sound arguments to facilitate a discussion here. TL;DR - This post does not claim “AI doom is inevitable” but reasserts there are logical, prominent forces that will, with a very high probab...
3WQQArGdtNJo5eMD4_Tetherware_#2__What_every_human_.txt
{ "file_size": 20825 }
b59a063b-bcf0-48c9-bf6b-8f9a9b4dfe5f
These are very preliminary notes, to get the rough ideas out. There's lots of research lying around, a paper in the works, and I'm happy to answer any and all questions. The Northstar of AI Alignment, as well as Alignment at Large, should be Superwisdom and Moral RSI (Recursive Self-Improvement). Our current notion of ...
a4XgFC2wBzrTeeSCg_Notes_on_Superwisdom_&_Moral_RSI.txt
{ "file_size": 2370 }
a3fb8a94-209a-4ba3-9c4f-99cb03fc2dbf
I really like the combination of fantasy and science-fiction themes. I like when „magic” has some logical (ok, quasi-logical) explanation. I also don’t like the artificial division between magic and science – when in our world we use the word „magic” for something made up or for superstition, such a division makes sens...
PgfzwDHPnMprJjE7d_Few_concepts_mixing_dark_fantasy.txt
{ "file_size": 6546 }
c1d07db3-48bf-4af9-b84a-a3a91b5a31db
Content warning: this story is AI generated slop. The kitchen hummed with automated precision as breakfast prepared itself. Sarah watched the robotic arms crack eggs into a bowl while the coffee brewed to perfect temperature. Through the window, she could see the agricultural drones tending the family's private farm, h...
tp6HuvXsHfEZrdgaL_Cycles_(a_short_story_by_Claude_.txt
{ "file_size": 8185 }
11a4e64f-64cc-4b08-bbd7-f8b64f665da0
Ok this one got too big, I’m done grouping two months together after this. BAIF wants to do user interviews to prospect formal verification acceleration projects, reach out if you’re shipping proofs but have pain points! This edition has a lot of my takes, so I should warn you that GSAI is a pretty diverse field and I ...
wm6FzAnEq6XaSkYJL_January-February_2025_Progress_i.txt
{ "file_size": 13882 }
aef33cd7-fd67-4b1e-a567-00c1b9cffb50
Vegans are often disliked. That's what I read online and I believe there is an element of truth to the claim. However, I eat a largely[1] vegan diet and I have never received any dislike IRL for my dietary preferences whatsoever. To the contrary, people often happily bend over backwards to accommodate my quirky diet...
DCcaNPfoJj4LWyihA_Weirdness_Points.txt
{ "file_size": 5342 }
04802cff-1c73-4496-8ed1-cc571180d293
It took me months to outgrow my anxiety and depression. Afterward, I wondered, “How could this have taken hours instead?” This was my guiding light as I’ve learned how to help others resolve their chronic issues. This post is only about the data I have seen with my eyes. It talks heavily about my own experience and my ...
bZ4yyu6ncoQ29qLyy_Do_clients_need_years_of_therapy.txt
{ "file_size": 9707 }
a8396093-b59d-4402-bf0f-76a740b5d6b8
It's been 10 years since the final chapter of HPMOR and it's time to look back and celebrate the magic. In the spirit of helping me avoid a shlep to NYC or Philadelphia, I invite anyone and everyone to the Princeton HPMOR 10 Year Anniversary Party! The event will be 6PM at the Prince Tea House in Princeton NJ. There is...
uMydbhsABGzQZ3Hjd_[New_Jersey]_HPMOR_10_Year_Anniv.txt
{ "file_size": 800 }
575516ef-96f8-4f41-bd7a-07e9abee8591
This is not o3; it is what they'd internally called Orion, a larger non-reasoning model. They say this is their last fully non-reasoning model, but that research on both types will continue. They say it's currently limited to Pro users, but the model hasn't yet shown up on the chooser (edit: it is available in the app)...
fqAJGqcPmgEHKoEE6_OpenAI_releases_GPT-4.5.txt
{ "file_size": 5675 }
7653bcf2-a086-45ad-8c66-b49fa35b5635
AI is transforming our world, but who holds it accountable? We are introducing AEPF_OpenSource, a fully open, community-driven framework for ensuring AI systems operate ethically, transparently, and fairly—without corporate control or government overreach. What is AEPF? AEPF (Adaptive Ethical Prism Framework) is an ope...
rHue2zpDe2Cc7BwpM_AEPF_OpenSource_is_Live_–_A_New_.txt
{ "file_size": 1423 }
ba030d5e-c42d-4377-9c7d-6d841f24541f
We are releasing a new paper called “The Elicitation Game: Evaluating Capability Elicitation Techniques”. See tweet thread here. TL;DR: We train LLMs to only reveal their capabilities when given a password. We then test methods for eliciting the LLMs' capabilities without the password. Fine-tuning works best, few-shot p...
6QA5eHBEqpAicCwbh_The_Elicitation_Game__Evaluating.txt
{ "file_size": 4529 }
93055e20-0d8a-4d8a-824b-4bc4d6133a24
A framework for quashing deflection and plausibility mirages The truth is people lie. Lying isn’t just making untrue statements, it’s also about convincing others that what’s false is actually true (falsely). It’s bad that lies are untrue, because truth is good. But it’s good that lies are untrue, because their falsity is a...
Q3huo2PYxcDGJWR6q_How_to_Corner_Liars__A_Miasma-Cl.txt
{ "file_size": 12094 }
2b8bbad7-fa21-427e-9c4f-de3e5e2716b3
Introduction Most discussions of artificial superintelligence (ASI) end in one of two places: human extinction or human-AI utopia. This post proposes a third, perhaps more plausible outcome: complete separation. I'll argue that ASI represents an economic topological singularity that naturally generates isolated economi...
kdeye2KCfj6bJtngp_Economic_Topology,_ASI,_and_the_.txt
{ "file_size": 13448 }
d32134be-8520-431e-9aaf-53e88ce03a67
I just conducted a fascinating experiment with ChatGPT4 that revealed a fundamental failure in AI alignment—one that goes beyond typical discussions of outer and inner alignment. The failure? ChatGPT4 was unable to track whether its own iterative refinement process was actually improving, exposing a deeper limitation i...
QMqdrTfmuJXsAcopq_The_Illusion_of_Iterative_Improv.txt
{ "file_size": 8092 }
20ea318a-ac1e-43bd-acf1-bc7345854e1d
It’s happening! We got Claude 3.7, which is now once again my first-line model for questions that don’t require extensive thinking or web access. By all reports it is especially an upgrade for coding; Cursor is better than ever, and there is also a new mode called Claude Code. We are also soon getting the long-awaited Alex...
v5dpeuj4qPxngcb4d_AI_#105__Hey_There_Alexa.txt
{ "file_size": 69242 }
86c1ca20-f5e9-4b70-8a62-62854d78bded
Crossposted to the EA forum. Over the last few years, progress has been made in estimating the density of Space-Faring Civilizations (SFCs) in the universe, producing probability distributions better representing our uncertainty (e.g., Sandberg 2018, Snyder-Beattie 2021, Hanson 2021, etc.). Previous works were mainly l...
mdivcNmtKGpyLGwYb_Space-Faring_Civilization_densit.txt
{ "file_size": 26705 }
a1614fa2-3855-4ed5-a1f2-1a5045d77650
LLM-based coding-assistance tools have been out for ~2 years now. Many developers have been reporting that this is dramatically increasing their productivity, up to 5x'ing/10x'ing it. It seems clear that this multiplier isn't field-wide, at least. There's no corresponding increase in output, after all. This would make ...
tqmQTezvXGFmfSe7f_How_Much_Are_LLMs_Actually_Boost.txt
{ "file_size": 6326 }
a11c36e0-8554-48c3-97a0-9998b1739b98
Abstract:  This study examines the risks to humanity’s survival associated with advances in AI technology in light of the “benevolent convergence hypothesis.” It considers the dangers of the transitional period and various countermeasures. In particular, I discuss the importance of *Self-Evolving Machine Ethics (SEME)*...
iAwym5mXkRQLeKWdj_Proposing_Human_Survival_Strateg.txt
{ "file_size": 24665 }
b0031f32-9a57-45ff-8bbc-026450e24cc4
"This article proposes a new AI model in which conversations — especially those involving AI-generated opinions, empathy, or subjective responses — are made public. AI should not exist in private, hyper-personalized interactions that subtly shape individual beliefs; instead, it should function within open discourse, wh...
MtQX8QBpZNeuzsm7h_Keeping_AI_Subordinate_to_Human_.txt
{ "file_size": 1159 }
f46fbc76-db02-4b45-99aa-02c5dc135d08
Introduction: Control is Not Enough There is a tension between AI alignment as control and alignment as avoiding harm. Imagine control is solved, and then two major players in the AI industry fight each other for world domination—they might even do so with good intentions. This could lead to a cold war-like situation w...
NecfBNGdtjM3uJqkb_Recursive_alignment_with_the_pri.txt
{ "file_size": 27991 }
1639bd44-01df-44a0-8d97-f2a34f146709
Last week Kingfisher went on tour with Alex Deis-Lauby calling. Similar plan to last year: February break week, rented minivan, same caller, many of the same dances and hosts. This time our first dance was Baltimore, and while it's possible to drive from Boston to Baltimore in one day and then play a dance, we decided...
4tCAFCXW8p7xiJiY8_Kingfisher_Tour_February_2025.txt
{ "file_size": 6306 }
f6d43a62-b2ef-4ce4-bbfb-17052fa8500f
I don't know how to say this in LessWrong jargon, but it clearly falls into the category of rationality, so here goes: Consumer Reports is a nonprofit. They run experiments and whatnot to determine, for example, the optimal toothpaste for children. They do not get paid by the companies they test the products of. Listen...
QbdXxdygRse9gMvng_You_should_use_Consumer_Reports.txt
{ "file_size": 789 }
7e8a2410-5760-4661-b5f6-3dcfa32bce8b
Yusuke Hayashi (ALIGN) and Koichi Takahashi (ALIGN, RIKEN, Keio University) have published a new paper on the controllability and safety of AGI (arXiv:2502.15820). This blog post explains the content of this paper. From automaton to autodidact: AI's metamorphosis through the acquisition of curiosity Why is AGI Difficul...
AndYxHFXMgkGXTAff_Universal_AI_Maximizes_Variation.txt
{ "file_size": 8844 }
0eb6bc08-5b17-4f73-95d6-98336d122960
Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics. I mention that upfront for anyone who isn't already familiar, since I understand a link post to some video full of hot takes on AI from some random YouTuber wouldn'...
uxzGHw4Lc8HAzz7wX_"AI_Rapidly_Gets_Smarter,_And_Ma.txt
{ "file_size": 3483 }
25737f25-f981-4241-b644-5278718eb9fb
When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. —Richard Feynman, "Cargo Cul...
DiLX6CTS3CtDpsfrK_Why_Can't_We_Hypothesize_After_t.txt
{ "file_size": 3778 }
121b0978-0cfe-441a-8a11-2b57e2d1677c
Content Warning: existential crisis, total hedonistic utilitarianism, timeless worldview, potential AI-related heresies. Hi, first post here. I’m not a native speaker, but I think it’s fine. I suffer from the illusion of transparency, yet if I delve into every detail of my reasoning, it might get a bit lengthy. So, if ...
AfAp8mEAbuavuHZMc_For_the_Sake_of_Pleasure_Alone.txt
{ "file_size": 21514 }
2a1bfb9d-0a54-456f-8d30-7b1878fcc882
I made a list of mental operations utilized in forecasting, inspired by Scott Alexander and Gwern and I'd like to find out which work the best. If you're a Manifold user with at least 10 bets on your account and 6 minutes to spare, you can fill out my survey here (deadline: March 8). You can also bet on the results on ...
eTNaFuuujoQGjHYgx_Thoughts_that_prompt_good_foreca.txt
{ "file_size": 392 }
fb9faf1a-369b-4c60-a5ad-c2115fbade0e
TL;DR: Representation engineering is a promising area of research with high potential for bringing answers to key challenges of modern AI development and AI safety. We understand it is tough to navigate, and we urge all ML researchers to have a closer look at this topic. To make it easier, we publish a survey of the rep...
6mCDnZWjrQNMkqdiD_Representation_Engineering_has_I.txt
{ "file_size": 6523 }
cd6af302-4550-49d6-b6df-7d5c78fd2d07
Author note: This is basically an Intro to the Grey Tribe for normies, and most people here are already very familiar with a lot of the info herein. I wasn't completely sure I should post it here, and I don't expect it to get much traction, but I'll share it in case anyone's curious. Introduction This post is about tri...
9ijjBttAN4A3tcxiY_The_non-tribal_tribes.txt
{ "file_size": 30232 }
57817791-6aa9-49a1-aef9-7b2713ad88b3
Abstract Sparse Autoencoders (SAEs) linearly extract interpretable features from a large language model's intermediate representations. However, the basic dynamics of SAEs, such as the activation values of SAE features and the encoder and decoder weights, have not been as extensively visualized as their implications. T...
ATsvzF77ZsfWzyTak_SAE_Training_Dataset_Influence_i.txt
{ "file_size": 36649 }