Hacker News

I'm guessing they have a lot of shares in the AI companies they work(ed) for, and they would like to pump their value so they can buy an even nicer Caribbean island than they can already afford?


Kokotajlo gave up all his shares in OpenAI as part of his refusal to sign a non-disparagement agreement with them.


Kokotajlo in particular is notable for being the guy who quit OpenAI in 2024 in protest of their policy of requiring researchers to abide by a non-disparagement agreement in order to retain their equity. In the end OpenAI caved and changed their policy, but if he had been lying all along to inflate the value of his shares, it would have been quite the 4D chess move to gamble the shares themselves on doing so.


Isn't it just that he left well before GPT-5, then? At that point a sufficiently naive person could still have believed that scaling was going to lead to AGI, but that sort of optimism died after he was already an outsider.


Kokotajlo still believes we get AGI in the next few years. These are his most up-to-date numbers at the moment: https://www.aifuturesmodel.com/


I love the total lack of humility on that site. "What if the METR study turns out not to capture anything relevant? We just add a constant gap to be conservative!" But I guess these guys aren't really scientists, so it's probably a lot to ask that they engage critically with what they are doing and be honest about the limitations of their methods.

What if it turns out that the more you scale, the more your LLM resembles a lobotomized human? It looks like things are going really well in the beginning, but you are just never going to get to Einstein. How does that affect everything?

What if it turned out that those AI companies have a whole bunch of humans solving the problems that currently sit just below the 50% reliability threshold they set, and then fine-tune on those solutions? That would make their models perform better on the benchmark, but it's just training for the test... would the constant gap be a good approximation then?


Not quite.

Kokotajlo quit because he didn't think OpenAI would be good stewards of AGI (non-disparagement wasn't in the picture yet). As part of his exit, OpenAI asked him to sign a non-disparagement agreement as a condition of keeping his equity. He refused and gave up his equity.

To the best of my knowledge he lost that equity permanently and no longer has any stake in OpenAI (even though this episode later led to an outcry against OpenAI, prompting them to remove the non-disparagement agreement from future exits).



