Capturing Consensus | Nasdaq

Over the past couple of years, we’ve seen the rise of new types of AI, including generative and agentic. How do you see AI continuing to evolve in 2025 and beyond?

AI is AI. While terms like “generative” and “agentic” are helpful in simplifying the technology for the general public, they can also be misleading. These capabilities, such as Natural Language Generation (NLG), have existed for some time and represent just one part of a much broader AI toolbox.

As hardware continues to improve, AI will become more capable, more specialized, and more deeply embedded in our daily lives, similar to how the internet gradually became a foundational technology. A strong example is Medical AI, which is rapidly emerging as a new standard of care. While there have been a few early adopters, we’re seeing a wave of fast followers. Patients are beginning to expect AI-driven capabilities when choosing doctors, hospitals, and insurance providers. Physician acceptance has grown from roughly 35% in 2019 to around 70% today, a significant cultural shift.

Under the Trump administration, many market participants expect AI regulations to change, and you have encouraged the U.S. to take a slower approach than European regulators. How do you think regulations will change during this administration?

This administration appears pragmatic and supportive of American business. Overregulation risks slowing innovation, particularly with intense AI competition coming from China and Russia. I would expect the administration to back U.S.-based AI companies like DeLorean AI as strategic assets.

The European Union’s regulatory approach has, in many ways, stifled its own tech sector. Major American tech firms have faced significant regulatory headwinds in Europe, and the region’s AI industry has struggled to remain globally competitive. That should serve as a cautionary tale for us.

Many concerns labeled as “AI ethics” are already covered under existing data privacy laws. Rather than creating new, overlapping regulations, government agencies should focus on enforcing what’s already in place.

Lastly, I would strongly recommend that the administration seek guidance from actual practitioners, those who build and use AI every day, rather than relying solely on commentators or academics who may be removed from the technology’s real-world applications.

When you think about global AI regulations, how can we ensure that regulations won’t hinder innovation and growth?

By their nature, regulations tend to inhibit innovation and growth. However, I believe the foundational guardrails we need are already established through existing data legislation. Enforcement, not expansion, should be the focus.

I’d encourage the U.S. government to proactively support domestic AI companies in several key areas:

  • Supply chain security: Ensure we have the materials needed for hardware – rare earths, chips, servers.
  • IP protection: Safeguard American innovation. If foreign actors engage in IP theft, their U.S.-based representatives should be held accountable.
  • R&D incentives: The current R&D tax credit is underpowered. We need more meaningful incentives for AI innovation.
  • Talent strategy: In the short term, expand H-1B visas. In the long term, we must strengthen STEM education and ensure our universities are producing AI-ready talent.

Finally, we must make it easier for government agencies and private companies to adopt AI tools. This is how we stay competitive.

In your recent TradeTalks discussion, you called out the need for hardware and greater server capacity as AI continues to develop. What are your expectations for storage growth over the next year and next decade?

The only honest answer is that storage demand will keep growing, and the growth will be exponential.

From a national security and economic standpoint, we must secure access to the raw materials and skilled labor needed to build and operate chip and server infrastructure. My colleagues and I are already exploring locations that offer the power capacity necessary to host these server farms.

This demand presents a compelling opportunity to integrate renewable energy sources into the AI infrastructure. For example, old factory sites in New England could be revitalized using hydroelectric power. There’s tremendous potential for sustainable growth in this sector.

You also mentioned that AI itself can’t be biased. Can you elaborate on how companies can ensure there’s no bias in the datasets used to create their AI models?

That’s correct: AI itself, as a machine, isn’t inherently biased. Any bias comes from the data it’s trained on, and this is where things get complex.

First, companies must regularly audit their models to ensure there’s no bias related to legally protected classes. We already have regulations that require this. Second, well-trained scientists who adhere to the scientific method understand the importance of designing balanced datasets from the start.

It’s also important for clients to ask the right questions of their vendors. Transparency in model development and training data is critical.

That said, sometimes the data reflects an inherently homogenous population. For example, a model trained on data from Iceland, where the population is relatively uniform, may not perform well if applied to a diverse region like Orlando. This isn’t bias in the model, but rather a mismatch in training data versus application context.
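The kind of dataset audit described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the field names `group` and `approved` and the toy data are my own, not from the interview): it compares positive-outcome rates across groups and flags any group whose rate falls below a fraction of the highest group's rate, in the spirit of the "four-fifths rule" often used as a screening heuristic.

```python
from collections import Counter

def audit_group_balance(records, group_key, positive_key, threshold=0.8):
    """Screen a labeled dataset for outcome-rate gaps across a group field.

    records: list of dicts; group_key names the group field,
    positive_key names a boolean outcome field.
    threshold: minimum allowed ratio between a group's positive rate
    and the highest group's rate before that group is flagged.
    """
    counts = Counter(r[group_key] for r in records)
    positives = Counter(r[group_key] for r in records if r[positive_key])
    rates = {g: positives[g] / counts[g] for g in counts}
    top = max(rates.values())
    flagged = [g for g, rate in rates.items() if top > 0 and rate / top < threshold]
    return rates, flagged

# Hypothetical toy data: group B's approval rate is half of group A's.
data = (
    [{"group": "A", "approved": True}] * 80
    + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 40
    + [{"group": "B", "approved": False}] * 60
)
rates, flagged = audit_group_balance(data, "group", "approved")
# rates → {"A": 0.8, "B": 0.4}; flagged → ["B"]
```

A check like this is only a starting point: as noted above, a rate gap may reflect a mismatch between training data and the application context rather than a flaw in the model, so flagged results need human review, not automatic rejection.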

What can companies and policymakers do today to prepare for the next wave of AI innovation?

Companies need to invest in AI literacy at the leadership level. Too often, we see CEOs delegating AI decisions to CIOs who are experts in IT, but not in AI. That’s a critical misalignment. You need decision-makers who understand the unique nature of AI technology.

Also, there’s no need to reinvent the wheel — buy proven AI products. Building custom solutions in-house may not make sense for some industries, such as healthcare, where AI isn’t the core competency.

For policymakers, it’s crucial to seek input from primary sources, people who are building and using AI, not just theorists or strategists. Real-world practitioners offer the most grounded, actionable insights. And most importantly, policymakers must focus on enabling and nurturing the U.S. AI industry, rather than over-regulating it.