
AI at a Crossroads: The Business Case for Private Models & the Workforce of Tomorrow

Artificial intelligence (AI) is evolving fast—faster than most businesses can keep up with. Omega Business Solutions founder Marcus Blair recently led a panel discussion highlighting ways that businesses of all sizes can gain a competitive edge.



The Knoxville Entrepreneur Center gathered some of the top AI experts in our area to cut through the hype and provide a realistic view of where things are headed.


AI at a Crossroads Panelists:

Dr. Amir Sadovnik, Research Lead for the Center for AI Security Research (CAISER) at Oak Ridge National Laboratory (ORNL)

Ozlem Kilic, Vice Provost and Founding Dean of the College of Emerging and Collaborative Studies at The University of Tennessee, Knoxville (UTK)

John Derrick, Founder and CEO at Authentrics.ai 


Blair guided the conversation through a variety of topics, including the business case for private AI models, the workforce shifts we should expect, and the larger societal implications of increasingly advanced AI systems.


The case for private AI models

Public large language models (LLMs) are powerful, but they come with inherent risks. As Ozlem Kilic pointed out, every input you provide could be stored, analyzed, and even repurposed. That’s a major concern for businesses handling proprietary data or sensitive customer information. The panel emphasized the importance of understanding these risks and, where needed, bringing in AI experts to help establish best practices.


The discussion also touched on the growing feasibility of private AI models. With advances in chip technology and specialized smaller models, companies now have many options for keeping control over their AI without relying on external providers. However, as John Derrick noted, data sourcing will remain a challenge: businesses need enough high-quality data to train their models effectively.
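For readers who want a concrete picture, here is a minimal sketch of what a fully private setup can look like: a small open-weight model running on your own hardware via the open-source Hugging Face transformers library, so prompts and data never leave your infrastructure. The model name below is purely illustrative and was not discussed by the panel, and a real deployment would add access controls, monitoring, and evaluation.

```python
# Minimal sketch: run a small open-weight model locally so prompts and
# proprietary data never leave your own infrastructure.
# Assumes `pip install transformers torch`; the model name is illustrative --
# substitute any small open-weight model your license permits.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example of a "specialized smaller model"
    device_map="auto",                         # use a local GPU if one is available
)

prompt = "Summarize the key risks of pasting customer PII into a public chatbot."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```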


Balancing privacy and accessibility

Many businesses struggle with the tradeoff between privacy and ease of use. While public models lower the barrier to entry, they also introduce risks. The panel explained that the “right” solution may differ for every company, depending on how the AI will be used and the nature of the data involved. For example, if your business deals with proprietary information, customer data, or personally identifiable information (PII), a private model is likely the best path forward. As Dr. Amir Sadovnik explained, the good news is that there are many approaches to choose from, meaning businesses can find a solution that fits their specific needs.


Will every business—and individual—have their own AI?

The panelists agreed that the future isn’t just about businesses adopting AI. They foresee every individual using their own customized AI model, like a personal assistant in their pocket. As technology improves, AI assistants will become as common as smartphones. Kilic believes this will enable more people to become entrepreneurs by helping them overcome technical and skill limitations. She emphasized that human ingenuity, rather than raw technical ability, will be the differentiator in the AI-powered economy.


The workforce of tomorrow: What skills will matter?

AI isn’t just changing how we work—it’s changing what skills are in demand. Kilic emphasized that a college education and other workforce training must include not only how to use AI, but also how to do so safely and ethically. Future professionals should understand how to select and train AI systems and critically evaluate the outputs they generate. As Dr. Sadovnik noted, just as we teach people not to believe everything they read on the internet, we must teach them not to blindly trust AI-generated content.



The risks of AI advancements

The conversation took a serious turn when discussing the potential risks of increasingly advanced AI. As AI generates more content and begins training itself, new challenges will arise. Could AI reach a point where it poses a direct threat to human safety? Dr. Sadovnik explained that some highly respected experts think it’s possible. The panelists agreed that we are at a pivotal point: the decisions we make now will determine AI’s future impact. Derrick noted that we need to carefully consider which systems we allow AI to control.


AI hallucinations: a feature or a flaw?

AI hallucinations—incorrect or fabricated outputs—are often framed as a flaw, but the panel offered a different perspective. Dr. Sadovnik explained that creativity is a necessary feature of AI; the problem arises when users assume AI is always correct. Derrick added that we should focus on building AI systems that can validate their own answers and admit when a definitive answer isn’t possible. In high-stakes areas like national security and healthcare, it’s especially important for AI to explain how it arrives at conclusions, allowing for human review and intervention.


Are ethics and regulation slowing down AI development?

Despite concerns that excessive regulation could cause the U.S. to fall behind countries like China, Dr. Sadovnik pointed out that there are currently very few AI safety regulations in place, so regulation has not been a hindrance to progress. However, as AI continues to advance, discussions around policy and ethics will become increasingly important.


The cognitive trade-off: Will AI make us less capable?

During the audience Q&A, Katelyn Biefeldt with Teknovation raised one of the more thought-provoking questions: If we rely on AI for everything, at what point does our own cognitive ability start to decline? Dr. Sadovnik explained that research suggests this is a potential risk, particularly for younger generations growing up in an AI-driven world. Kilic emphasized the need to continue fostering creativity, critical thinking, and the desire to make a positive impact in students and young professionals. While AI offers convenience, we shouldn’t allow it to replace human curiosity and problem-solving.


Could AI lead to a work-free utopia?

Another audience member raised the possibility of AI eliminating the need for work entirely, but Derrick suggested that the real question is whether humans could truly thrive in that scenario. Historically, societies without purpose-driven work have not flourished. While AI may change what we do, humans will always need opportunities to solve meaningful challenges.


Final thoughts

The future of AI is both exciting and uncertain. As Dr. Sadovnik pointed out early in the discussion, it can be difficult to distinguish genuinely significant advancements from marketing and corporate posturing.

This discussion gave us an objective look at what’s coming next, with plenty of food for thought. The panel agreed that private models are becoming more viable, workforce demands are shifting, and ethical considerations are more critical than ever. It’s clear that the businesses taking AI adoption seriously will gain a significant advantage in the years ahead.


