Why Trump supports Musk’s rivals via the Stargate project

Telewellness
5 min read · Jan 24, 2025


The truth about the inability of LLMs and agentic AI to evolve into AGI, and about a tricky plan to tank OpenAI

Multilevel chess game, Sheldon style

Congrats, Elon: now OpenAI will inevitably fail to deliver its overmarketed AGI, even with this additional $500 billion in resources, and that failure will result in Sam Altman’s resignation.

Yann LeCun claims that Sam Altman’s statements about “reaching AGI” by scaling data and compute were based on false premises.

“OpenAI’s products package things invented and openly published by others, using tools produced by others. ChatGPT is built with PyTorch (developed by Meta) and uses transformer architectures (developed at Google).” (Yann LeCun)

After such comments, and after learning the opinions of Strong AI Summit speakers Gary Marcus and Ed Musinski, you can bet on OpenAI being doomed on a five-year horizon, unless Sam acquires a bunch of startups with more viable approaches to AGI development.

How to actually reach AGI?

“Getting to human-level AI is not a mere engineering challenge. It is a scientific research challenge and will require contributions from the entire scientific community. That’s why open research is so important. It will emerge progressively and almost simultaneously in many labs. It’s not going to be done by one company or lab.

AI will require a change of paradigm. Merely scaling up GPTs with a few hacks (RAG, token-space planning by selection, etc.) is simply not going to get us there. Also, it’s a *research* problem that a money-losing, product-focused outfit like OpenAI has a hard time devoting resources to. You need a long runway, institutional stability, and financial stability. You need to attract the best scientists, which is essentially impossible for a secretive outfit.” (Yann LeCun)

At the Strong AI Summit, the AGI Alliance, led by Ed Musinski, will announce a cooperative initiative to develop AGI in “Manhattan Project” mode. The idea stems from the Emtech AI Rating, presented at a Davos 2020 event in the hotel where Trump stayed that year.

https://telewellness.medium.com/ai-wars-2020-top-emerging-technologies-to-be-presented-at-wef-1bbcc82b1b14

This project of cooperative AGI development within the AGI Alliance is now backed by thought leaders such as Adam Robinson.

https://medium.com/@ed_78550/mission-of-agi-alliance-a-new-manhattan-project-b375bd175174

LeCun’s next architecture: objective-driven AI and a predictive architecture

Yann argued for abandoning generative models, probabilistic models, contrastive models, and much of reinforcement learning. What, then, are objective-driven AI and the joint-embedding predictive architecture?

Yann described objective-driven AI and JEPA (joint-embedding predictive architecture) as the core of the work he is doing at FAIR at Meta. When I asked Yann what he would say to critics of JEPA as an approach, he said that colleagues such as Aravind are really asking for a demonstration of the technology, which it is too early to have at this point.

https://www.wing.vc/content/rajeev-chand-yann-lecun-ai-research-predictions
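To make JEPA concrete, here is a minimal, hypothetical sketch of a JEPA-style training step in PyTorch. It is my own illustration of the publicly described recipe (predict the embedding of a target view from a context view, with a slowly updated target encoder as in Meta’s I-JEPA), not LeCun’s or Meta’s actual code; all names and sizes (`context_encoder`, `predictor`, `DIM`) are placeholders.

```python
# Hypothetical JEPA-style training step (illustration only, not Meta's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 128  # placeholder embedding size

def mlp():
    return nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

context_encoder = mlp()   # encodes the visible/context view x
target_encoder = mlp()    # encodes the target view y (EMA copy, no gradients)
predictor = mlp()         # predicts y's embedding from x's embedding

target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def train_step(x, y, ema=0.996):
    # Embed both views; the target branch stays out of the autograd graph.
    sx = context_encoder(x)
    with torch.no_grad():
        sy = target_encoder(y)
    # Predict the target embedding from the context embedding and minimize
    # the distance in representation space -- the core JEPA idea.
    loss = F.mse_loss(predictor(sx), sy)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Let the target encoder slowly track the context encoder via an
    # exponential moving average (as in I-JEPA) to keep targets stable.
    with torch.no_grad():
        for pt, pc in zip(target_encoder.parameters(),
                          context_encoder.parameters()):
            pt.mul_(ema).add_(pc, alpha=1 - ema)
    return loss.item()

# Toy usage: x and y stand in for two views of the same underlying state.
x, y = torch.randn(32, DIM), torch.randn(32, DIM)
print(train_step(x, y))
```

The key design choice is that the loss lives in representation space: the model never reconstructs raw pixels or tokens, which is what distinguishes a joint-embedding predictive architecture from a generative one.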

Cooperative AGI development by the AGI Alliance starts with a definition of AI/AGI for the purposes of industry self-regulation.

In an era where technology increasingly influences every aspect of society, the definition and application of Generative AI raise pressing ethical questions, particularly within the U.S. judicial system.

In 2025, the legislative and regulatory landscape surrounding AI is evolving rapidly, with a focus on transparency, accountability, and compliance. Based on the analyzed articles and legislative perspectives, here are the most commonly used definitions of AI and key trends:

  1. AI as Automated Decision-Making Tools: AI is often described as technologies that automate decision-making processes, raising concerns about algorithmic discrimination and fairness. Legislation like the Colorado AI Act emphasizes transparency and consumer rights.
  2. Generative AI: AI systems capable of creating human-like content (text, images, audio, etc.) are categorized separately and require clear guidelines for transparency and ethical use, as highlighted in the Illinois Supreme Court’s AI Policy. Effective January 1, 2025, that policy provides a contextual definition of AI, particularly Generative AI (Gen-AI), as technologies capable of creating human-like text, images, videos, and audio, and emphasizes ethical integration into judicial systems.
  3. Generative AI Legislation (AB-2013): California’s law regulating generative AI adopts a similar definition, describing AI as a system that “can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data.”
  4. California’s AB-2885: This bill defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” This definition is consistent across multiple California statutes, including the Business and Professions Code, Education Code, and Government Code.
  5. Federal Definition (15 U.S.C. 9401(3)): The Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” (January 23, 2025) adopts the definition of AI from 15 U.S.C. 9401(3), rooted in the National Artificial Intelligence Initiative Act of 2020 (NAII). This definition emphasizes AI as a technology that enables machines to perform tasks typically requiring human intelligence, such as learning, reasoning, and problem-solving.
  6. Biden Administration’s Executive Order 14110: Although revoked by President Trump in 2025, this executive order defined AI broadly as “machine-based systems that can, for a given set of objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” It emphasized safety, security, and ethical considerations.
  7. Algorithmic Discrimination Mitigation: Laws are increasingly focused on preventing bias in AI systems, particularly in hiring, lending, and healthcare; a minimal sketch of what measuring such bias can look like follows this list.
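Since algorithmic discrimination is the recurring technical concern in these definitions, here is a minimal, hypothetical sketch of one common bias check: the demographic parity gap, i.e. the difference in positive-outcome rates between groups. The hiring data below is fabricated purely for illustration; real compliance audits use far richer metrics and real decision logs.

```python
# Toy demographic parity check on fabricated hiring decisions.
from collections import defaultdict

decisions = [  # (group, hired?) -- hypothetical model outputs
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    positives[group] += hired  # True counts as 1

# Positive-outcome rate per group, and the gap between groups.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.0%}")   # 50%
```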

What’s wrong with Agentic AI?

AI agents currently fail to perform effectively in real-world applications. The core issues stem from their reliance on strong, scalable language models, which are often inadequate:

  1. Compounding errors in tasks: complex tasks executed by agents are prone to error compounding, drastically reducing output accuracy; even small per-step errors can escalate into significant failures (a toy calculation follows this list).
  2. Rising costs of implementation: switching to stronger language models can lead to exponentially higher operational costs, and ongoing validation of outputs requires additional powerful models, increasing the financial burden.
  3. Non-deterministic outcomes: using AI agents shifts software development from deterministic code to non-deterministic model outputs, which complicates deployment and can make solutions costly and less reliable.
  4. Businesses’ reluctance to change: companies are unlikely to replace reliable operations with error-prone AI agents for critical processes.

General intelligence, by contrast, encompasses a wide range of mental functions, such as perception, attention, learning, memory, language comprehension, reasoning, decision-making, and problem-solving.
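A toy calculation makes the compounding-errors point concrete: if each step of an agent pipeline succeeds independently with probability p, a task chaining n steps succeeds with probability p^n. The accuracies below are hypothetical, chosen only to show how fast reliability decays.

```python
# Toy illustration of error compounding in multi-step agent pipelines.
for p in (0.99, 0.95, 0.90):      # hypothetical per-step accuracies
    for n in (5, 10, 20, 50):     # number of chained agent steps
        print(f"per-step accuracy {p:.0%}, {n:2d} steps -> "
              f"end-to-end success {p ** n:.1%}")
```

Even at 95% per-step accuracy, a 50-step task succeeds less than 8% of the time, which is why small errors escalate into significant failures.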

Which agentic AI looks more promising?

1. Yugen AI Engine by Paul Bronstein

A psychologist AGI agent that effectively acts as you and can help you solve your problems, backed by an army of real psychologists who assess your situation, with reward mechanisms to help the user improve on a daily basis.

2. https://abacus.ai/chat_llm-ent — this one has a great description on its site, but I personally haven’t used it.
