AI in Enterprise Applications: Unleashing New Possibilities

Joe Zirilli, Vice President, Artificial Intelligence Solutions Architecture, Parsons Corporation

I have been developing AI-based systems since the early 1990s, long before AI was widely accepted or trusted. Convincing people of AI’s potential to improve performance was a significant challenge for many years.

Recently, acceptance has grown, but new challenges have emerged. The quality of the data used to train AI systems is crucial; poor data produces what I call “garbage in, garbage learned” (GIGL). Effective data management and curation remain the most difficult aspects of ensuring AI performs as intended. The issue is particularly pronounced with chatbots powered by large language models (LLMs), which require massive datasets and therefore raise the GIGL risk.
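To make the GIGL point concrete, here is a minimal sketch of a pre-training data curation gate. The field name, length bounds and deduplication strategy are illustrative assumptions, not a description of any specific production pipeline.

# Minimal sketch of a data curation gate to reduce "garbage in,
# garbage learned" (GIGL) risk. Field names and thresholds are
# illustrative assumptions only.
import hashlib

def curate(records: list[dict]) -> list[dict]:
    """Drop empty, duplicate, or implausibly short/long training records."""
    seen_hashes = set()
    kept = []
    for rec in records:
        text = (rec.get("text") or "").strip()
        if not text:
            continue                      # drop empty records
        if not (20 <= len(text) <= 10_000):
            continue                      # drop implausibly short or long text
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue                      # drop exact duplicates
        seen_hashes.add(digest)
        kept.append(rec)
    return kept

if __name__ == "__main__":
    sample = [{"text": "Valid training example about bridge inspection."},
              {"text": ""},
              {"text": "Valid training example about bridge inspection."}]
    print(len(curate(sample)))  # prints 1

In practice the gate would also cover provenance checks, PII scrubbing and labeling review, but the principle is the same: filter before you train, not after the model has already learned the garbage.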

Barriers to Scaling AI in Mission-Critical Environments

Scaling AI in a complex enterprise involves three key functions:

a. Use Case Prioritization: Diverse and overlapping use cases make it difficult to determine which offer the most benefit. Establish criteria based on factors such as time, cost, capability and productivity, then rank use cases to focus on the highest priorities; a scoring sketch follows this list.

b. Enterprise Data & Intelligence Services Backbone: Gradually making all company data AI-ready is essential. Without this backbone, each use case becomes an isolated project, straining resources. A robust backbone supports rapid development and reduces deployment time as capabilities mature.

c. Staffing and Computing Plan: Access to experienced AI personnel is limited, and retaining them is difficult. While hiring third-party providers is an option, it can be costly and less agile as requirements evolve.
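As a rough illustration of the prioritization step in (a), the sketch below scores candidate use cases against weighted criteria and ranks them. The weights, use-case names and scores are purely illustrative assumptions.

# Minimal sketch of use-case ranking: weighted scores over the criteria
# named in (a). All weights, names and scores below are illustrative.
CRITERIA_WEIGHTS = {"time": 0.2, "cost": 0.3, "capability": 0.3, "productivity": 0.2}

def priority_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 1-5 scale)."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)

use_cases = {
    "contract review chatbot": {"time": 4, "cost": 3, "capability": 5, "productivity": 4},
    "sensor anomaly detection": {"time": 2, "cost": 4, "capability": 4, "productivity": 5},
    "meeting summarization":    {"time": 5, "cost": 2, "capability": 2, "productivity": 3},
}

ranked = sorted(use_cases.items(), key=lambda kv: priority_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{priority_score(scores):.2f}  {name}")

The value of even a simple model like this is that it forces stakeholders to agree on the criteria and weights before arguing about individual use cases.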

Balancing Innovation with Responsibility

My perspective on transparency differs from the prevailing view. I believe we will always lag behind in achieving full transparency due to the complexity of AI models. While tools are being developed to improve transparency, true understanding is elusive because of the sheer magnitude of these models. For example, OpenAI’s GPT-3 has 175 billion parameters, which is beyond human comprehension. People will need to learn how to trust AI models just as they trust other humans. However, while data privacy, security and ethical AI can be managed with a wide array of available tools, they cannot be guaranteed.

Exciting Technologies on the Horizon

We are particularly excited about Agentic AI, especially systems that can reason and are persistent. These AIs can operate autonomously to achieve specific goals and maintain functionality over extended periods. They learn, adapt and improve without frequent human intervention, which is crucial for maintaining deployed AI systems.
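As a rough illustration of what persistent, goal-directed operation means in practice, here is a minimal sketch of an agent loop that works toward a goal and checkpoints its state so it can survive restarts. The state file, goal check and placeholder action selection are illustrative assumptions; a real system would put an LLM or planner behind choose_action.

# Minimal sketch of a persistent, goal-directed agent loop. The state file,
# goal check and action selection are illustrative assumptions only.
import json, pathlib, time

STATE_FILE = pathlib.Path("agent_state.json")  # assumed persistence location

def load_state() -> dict:
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"progress": 0}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

def choose_action(state: dict) -> str:
    # Placeholder for reasoning/planning (an LLM or planner call in practice).
    return "advance"

def run(goal_progress: int = 5) -> None:
    state = load_state()
    while state["progress"] < goal_progress:   # operate autonomously toward the goal
        if choose_action(state) == "advance":
            state["progress"] += 1
        save_state(state)                      # persist so the agent survives restarts
        time.sleep(0.1)
    print("goal reached:", state)

if __name__ == "__main__":
    run()

The key design point is the checkpointing: because the agent persists its own state, it can be stopped, restarted or handed off without losing progress toward its goal.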

In the near future, personal chatbots will become ubiquitous, knowing users better than they know themselves. This data explosion will necessitate significant advancements in networking and computing capacity, driving innovation, particularly in quantum computing. Quantum computing will likely become the cornerstone of future AIs, presenting new challenges.

Technology leaders must stay abreast of the latest developments in AI and quantum computing and continually evaluate how to integrate them into their enterprises. This vigilance is more critical now than ever and will remain so for the foreseeable future.