Artificial intelligence (AI) is reshaping industries, presenting both challenges and opportunities. At a recent session held by the UK Parliament’s All-Party Parliamentary Group (APPG) on Artificial Intelligence, renowned AI researcher Yann LeCun shared his views on the future of AI and its implications for investment managers. His testimony underscores that AI capability, control, and economics, though often treated in isolation, are deeply interconnected.
The landscape of AI risks has shifted significantly. The primary concerns are no longer which entity can train the largest models or acquire the most advanced hardware, but how AI systems are governed, where information is stored, and whether today’s heavy investments in large language models (LLMs) will yield satisfactory returns.
Sovereignty and AI risks
LeCun highlighted the most pressing risk in the AI domain: the concentration of information control within a few proprietary systems. This poses a serious threat to national security and to businesses overly dependent on these platforms. When a limited number of proprietary systems mediate research and decision-making processes, the resulting dependency risk escalates. Over time, trust, resilience, and data confidentiality may erode, diminishing the bargaining power of organizations reliant on such systems.
Mitigating sovereign AI risks
To counteract these risks, LeCun pointed to federated learning, a framework in which models are trained without the underlying data ever being centralized. Participants exchange model parameters (or parameter updates) while sensitive data stays local. Organizations thereby benefit from a model that performs as if it had been trained on the combined dataset, without surrendering data sovereignty.
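To make the idea concrete, here is a minimal federated-averaging sketch in NumPy. It illustrates the general pattern rather than anything LeCun specified: the linear model, the synthetic client data, and helper names such as local_step and fed_avg are all assumptions made for this example.

```python
# Minimal federated-averaging (FedAvg) sketch: clients train locally on
# private data; only model parameters travel to the server for averaging.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    return w - lr * grad                # updated local parameters

def fed_avg(weights, sizes):
    """Server-side aggregation: average client parameters, weighted by data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

# Synthetic setup: three clients, each holding private data drawn from the
# same underlying relationship y = X @ w_true + noise.
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)                          # global model, shared with all clients
for round_ in range(100):
    # Only parameter vectors cross the trust boundary; raw (X, y) never
    # leave a client.
    local_ws = [local_step(w.copy(), X, y) for X, y in clients]
    w = fed_avg(local_ws, [len(y) for _, y in clients])

print("learned:", w, "true:", w_true)
```

The weighting by client data size is the standard FedAvg choice; the essential property is visible in the training loop, where only parameters are exchanged while each client’s data stays local.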
However, implementing federated learning is not straightforward. It demands robust orchestration among the participating parties and secure cloud infrastructure at national or regional levels. And while the approach mitigates some risks, it does not eliminate the need for sovereign cloud capability and ongoing capital investment.
Strategic vulnerabilities of AI assistants
As AI becomes increasingly integrated into daily operations, LeCun emphasized the importance of diversifying the sources of AI assistants. These tools are evolving beyond mere productivity enhancers: they will increasingly mediate the flow of information, shaping users’ decisions and perspectives. Concentrating that power in a small number of assistants carries real societal risk, reinforcing existing biases and homogenizing analysis.
Cloud dependence and edge computing
While some AI processes can run on local devices, many tasks still depend heavily on cloud capacity. From a sovereignty perspective, deploying AI at the edge may offload some workloads, but it does not resolve the underlying questions of jurisdiction, privacy, and security, which continue to demand careful scrutiny of who controls data and compute.
Overstated capabilities of LLMs
LeCun cautioned against overestimating the intelligence of current LLMs, noting that fluency in language can be misleading. While these systems are adept at processing and generating human-like text, this does not equate to genuine reasoning or understanding. This distinction is crucial for developing agentic systems that rely on LLMs for planning and execution.
Current AI investments should be scrutinized to determine how much is directed towards building robust intelligence rather than merely enhancing user experiences through statistical pattern matching. The focus must shift towards creating models that can genuinely understand and predict real-world consequences rather than merely replicating language patterns.
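To see why fluency is a weak signal of intelligence, consider a deliberately crude sketch: a bigram model that generates locally plausible word sequences purely from co-occurrence counts. The toy corpus and variable names are invented for illustration, and real LLMs are incomparably more sophisticated, but the underlying objective of predicting the next token is the same in kind.

```python
# A tiny bigram "language model" in pure Python: fluent-looking output
# emerges from pattern matching alone, with no understanding of anything.
import random
from collections import defaultdict

corpus = ("the model predicts the next word . the model has no idea what "
          "a word means . the output sounds right because the statistics "
          "are right .").split()

# Count word -> next-word transitions (the entire "knowledge" of the model).
transitions = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    transitions[w].append(nxt)

random.seed(0)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(transitions[word])  # sample next word by frequency
    out.append(word)

print(" ".join(out))  # locally coherent, yet nothing is understood
```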
The future beyond LLMs
LeCun introduced the notion of world models: systems that learn the dynamics of the real world rather than merely the statistical structure of language. Such models aim to predict the outcomes of actions, the difference between superficial pattern recognition and genuine causal understanding. Existing architectures won’t disappear, but they may not be the ones that deliver sustainable productivity gains.
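As a toy illustration of the difference, the sketch below fits a “world model” to transitions from a simple linear dynamical system, then uses it to predict the consequences of candidate actions before taking them. The linear setup and names such as env_step are assumptions for this example, vastly simpler than the architectures LeCun has in mind.

```python
# Toy world model: learn next-state dynamics from (state, action) pairs,
# then evaluate an action's predicted outcome without executing it.
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth environment dynamics (unknown to the model): s' = A s + B a
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])

def env_step(s, a):
    return A_true @ s + B_true @ a

# Collect transitions (s, a, s') by acting randomly in the environment.
SA, S_next = [], []
s = rng.normal(size=2)
for _ in range(500):
    a = rng.normal(size=1)
    s_next = env_step(s, a)
    SA.append(np.concatenate([s, a]))
    S_next.append(s_next)
    s = s_next

SA = np.array(SA)          # inputs: state-action pairs, shape (500, 3)
S_next = np.array(S_next)  # targets: next states, shape (500, 2)

# Fit the world model W so that W @ [s; a] ~ s' (least squares).
sol, *_ = np.linalg.lstsq(SA, S_next, rcond=None)
W = sol.T                  # shape (2, 3): recovered [A | B]

# Planning with the model: predict the consequence of each candidate
# action without touching the real environment.
s = np.array([1.0, -1.0])
for a in (np.array([-1.0]), np.array([1.0])):
    pred = W @ np.concatenate([s, a])
    print(f"action {a[0]:+.0f} -> predicted next state {pred}")
```

The point is the workflow, not the model class: the system learns how actions change the state of the world and can evaluate an action’s outcome before acting, which is precisely what correlating language patterns does not provide.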
