6 Comments

Ilya Sutskever's departure has caused a significant decline in OpenAI's safety research.


It might have been in decline for some time now. His departure was just the final step that gathered a lot of attention.


You are right.


I wonder how often this happens in organizations today (whether A(G)I-focused or not), as different strategic decisions, and the actions that follow, are prioritized over time.

Prioritizing products/services/SaaS/models to capture market share amid limited "bandwidth" (shortages of chips, datacentre capacity, and technical expertise) is likely the goal of many A(G)I-focused businesses today.

To quote, "struggling for compute…" … "getting ready for next generation of models on security, monitoring, preparedness, safety, adversarial robustness, super(alignment), confidentiality, societal impact and related topics." This assumes models precede preparedness for safety, super(alignment), and so on.

"Bandwidth balance" is tough to pinpoint when the (multi-)modality space is evolving so fast.

Heavy is the crown worn by any CEO of a profit-centric A(G)I business.

However, should a business evolve that is centred on "security, monitoring, preparedness, safety, adversarial robustness, super(alignment), confidentiality, societal impact and related topics" for A(G)I, that focus alone would act as a counter-balance to one focused on "bandwidth" and perhaps on profit or the bottom line. Such an entity would need to be a separate non-profit, funded in a way that lets it understand the bandwidth requirements of acting as that necessary counter-balance. That would have to be its sole focus. Both cannot exist together in one space, or at least it will be challenging for them to do so.

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence


Thanks for sharing that article. It is very interesting.


Pleasure, Conrad.
