In 1942, science fiction author Isaac Asimov created ethical rules for robots that, at the time, existed only in his imagination.

Here are his Three Laws of Robotics:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given by a human being, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence if such protection does not conflict with the First or Second Law.

These hierarchical laws are designed to ensure robots act safely and ethically in the presence of humans, with the First Law taking precedence over the Second Law, and the Second Law taking precedence over the Third.

But those of you who saw the 2004 movie I, Robot saw these laws turned upside down by VIKI (Virtual Interactive Kinetic Intelligence), whose misinterpretation of the Three Laws placed humans under a benevolent but authoritarian dictatorship until hero Detective Spooner (played by Will Smith) destroyed her.

Evidently, I’m not the only one who worries about this amid the artificial intelligence race.

Anthropic CEO Dario Amodei is involved in a major clash with the Pentagon over concerns about the AI model Claude being used to run autonomous weapons and wide-scale population surveillance.

And according to my ChatGPT search, many AI executives are excited about AI’s potential but increasingly describe it as a top risk to their businesses, not just an opportunity.

Here are some main concerns executives cite, according to my AI search:

  • Data security and confidentiality: Leaders worry about sending sensitive data to external AI systems and cloud providers, seeing this as an “unacceptable risk” in some industries. Many surveys now show data protection and security as the single biggest AI-related concern. (Newsweek)
  • Compliance, privacy and regulation: Executives flag data protection compliance as their top regulatory issue when adopting AI and expect more oversight and regulation, but often feel their own governance is lagging behind adoption. (Forbes)


“Today’s science fiction is tomorrow’s science fact.” — Isaac Asimov

  • Ethical misuse and bias: Almost all surveyed tech leaders say they fear unethical AI use (bias, unfair decisions, misuse of generated content), yet fewer than half have robust internal oversight processes in place. (Harvard)
  • Unrealistic expectations and return on investment risk: CIOs and CTOs report that boards and CEOs often expect quick, transformative returns from AI; they worry about “random acts of AI” pilots that don’t scale and about overinvesting without clear business value. (Harvard Business Review)
  • Strategic and competitive risk: Many CEOs now rank AI itself as their biggest business risk, above geopolitical turmoil or cyberattacks, because underinvesting risks falling behind rivals while overinvesting risks financial underperformance. (CFO Dive)
  • Workforce impact and skills: Executives acknowledge that AI could displace or significantly change many jobs, which dampens some of their enthusiasm and raises concerns about re-skilling and employee morale. (Yahoo)
  • Reliability, errors and trust: A majority of CEOs say they have deliberately slowed AI deployment due to concerns about model errors, malfunctions and the difficulty of trusting AI systems in high-stakes use cases. (World Economic Forum)
  • Environmental and infrastructure costs: Some tech leaders point to the energy and water demands of large data centers as a growing concern, both for cost and sustainability.​ (Yahoo)

Anthropic has argued that Claude is not yet ready to control autonomous weapons — that human oversight is needed.

While I understand the Pentagon’s desire to have cutting-edge technology, especially while the United States is in a technology race with China, it seems prudent to listen to Claude’s inventors when they say it’s not ready for prime time.

While this issue was being discussed on CNBC recently, anchor Joe Kernen quipped that he’s not even comfortable getting into a fully autonomous car. I had to agree.

AI holds lots of promise in a wide range of areas, including health care, finance, automating routine document handling, customer service and more — with the potential to produce significant productivity gains.

But it also comes with some downright scary potential pitfalls. Many fear rapidly rising unemployment as workers are displaced by AI “bots.” And I’d hate for us to wind up serving an AI master as portrayed in I, Robot — or even like when HAL 9000 refused to open the pod bay doors for Dave in 2001: A Space Odyssey.

And I sure as heck hope the AI developers are making sure Asimov’s Three Laws are firmly rooted and impervious to change or misinterpretation. The fact that he foresaw this more than 80 years ago should not be overlooked.

Retired financial adviser Kirk Greene served hundreds of individuals, businesses and nonprofit organizations over his 40-year career. In 2020, he sold the Seattle-based registered investment advisory firm he founded to his partners and returned to Santa Barbara, where he grew up. He is an alumnus of Seattle University and earned ChFC and CLU designations from the American College of Financial Services. Kirk is past
president of the Estate Planning Council of Seattle and has been an active Rotarian for more than 25 years. The opinions expressed are his own, and you should consult your own financial, tax and legal advisers in thinking about your own planning.