AI and Finance - Terminator or Germinator?

Should we fear that AI will cut humans off (Terminator), or will humans be able to use AI in finance to arrive at novel solutions, helping investors to nourish (Germinator) optimal investment approaches?

Paul

6/24/2025 · 5 min read

AI: Terminator or Germinator?

Much has been, is and no doubt will be written about AI, by both humans and AI systems themselves. The scope of this blog post (not written by AI) is limited to AI and finance, more specifically portfolio investment. How will Banks, Portfolio Managers, Pension Funds, Insurance Companies, Family Offices, Corporate Treasurers, Fund Managers and individual investors alike navigate as AI capabilities evolve at the speed of light?

Should we all be afraid that AI will totally take over and cut humans off (Terminator), or, on the contrary, will humans be able to use AI in the field of finance to arrive at novel solutions and approaches, helping investors to identify, build and nourish (Germinator) optimal investment approaches?

History has shown that those who adapt thrive; those who cling on to ageing technologies perish over time. Looking back to see how we came to this point can help us to understand where we are going and what opportunities lie ahead.

  • Just a few decades ago, financial information was mainly distributed by the printed press (the WSJ, etc.) and investors went to their bankers for information.

  • At the turn of the millennium, the birth of the internet drastically changed this situation, with near real-time information becoming a readily available commodity.

  • The next disruption was the advent of Google and other search engines, allowing users to obtain targeted information on any subject, provided they used a good set of keywords. But it was still up to the user to process the resulting information and draw their own conclusions.

  • And very recently we saw AI entering the arena with chatbots like ChatGPT, amongst others, where information is pre-processed and presented in a way defined by the user. Here, the key to success is the ability to write good prompts.

  • The next and even larger disruption is already on the doorstep, giving individual users the possibility to build personalized AI agents themselves.

AI Agents

Okay, so AI agents are the next big thing, but what on earth is an AI agent and how is this different from “normal” AI?

Unlike the by now common and popular chatbots like ChatGPT, DeepSeek or Perplexity, an AI agent has the power to scrape and structure relevant information from multiple web sources, process that information in an organized manner in order to make suggestions or decisions, present those in predefined formats and, if so allowed, even execute them without human intervention. An AI agent must also have a memory of past decisions and learn from its earlier reasoning and mistakes. In short, you create an autonomous, automated, intelligent process.
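To make this concrete, here is a minimal sketch of such an agent loop in Python. The helper functions fetch_sources, analyse and present are hypothetical placeholders, not a real library; the point is only the structure: gather information, reason over it together with past decisions, report, and remember, with execution deliberately left out.

```python
# Minimal sketch of an AI agent loop, for illustration only.
# fetch_sources(), analyse() and present() are hypothetical placeholders
# the reader would supply; they are not part of any real library.
import json
from datetime import datetime, timezone

MEMORY_FILE = "agent_memory.json"

def load_memory() -> list:
    try:
        with open(MEMORY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save_memory(memory: list) -> None:
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f, indent=2)

def run_agent_once(fetch_sources, analyse, present) -> None:
    memory = load_memory()              # past suggestions and outcomes
    raw = fetch_sources()               # scrape/structure web data (placeholder)
    suggestion = analyse(raw, memory)   # reason over new data plus past mistakes
    present(suggestion)                 # report in a predefined format
    memory.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "suggestion": suggestion,
        "executed": False,              # execution stays with the human
    })
    save_memory(memory)
```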

But as things stand today, creating and setting up a personal AI agent that can operate and decide autonomously on your behalf, without human interference, still requires quite some human input. And that is exactly where the danger is hiding.

Who is holding the key?

Assuming building a personal AI agent is easy (spoiler alert: it is not yet), one could consider AI a “Germinator”, i.e. a self-built system that helps you with your personal goals and quests. As long as the creator keeps control over their creation, we are not in a “Terminator” or Frankenstein scenario. And exactly this is the key differentiator between “Terminator” and “Germinator”.

When you allow an AI agent to decide AND execute its decisions, you are skating on extremely thin ice, and it is the physical person who will eventually pay a potentially costly price for the AI agent's misinterpretations, erroneous reasoning or hallucinated (“ghost”) information. Famous last words: what could possibly go wrong?

Ergo, as long as your agent cannot execute autonomously, we are in Germinator territory.

Future of AI agents in finance

We cannot think or wish AI agents away anymore. They are here to stay until the next big thing comes along. Today it still helps to be IT-savvy to build such an agent (your personal, independent assistant), but as technology evolves it will become easier for non-professionals to imagine, create and activate their own AI agents in any field of life they wish. Including finance and investing.

A few non-finance examples: some people have created their very own AI virtual friend or lover to talk with, or have even built their AI psychologist to help them live, or just survive, in our complex society. However, does your AI friend, health coach, psychologist or self-built financial adviser replace real-life people and experience? Certainly not, but this is undeniably a new reality in which it remains crucial to be aware at all times that you are interacting with a “machine” that cannot be held responsible. That also raises the question: where is your sensitive data stored and who has access to it? Can you trust your AI friend or adviser?

Especially in finance, a crucial safeguard is that when your AI system suggests actions to take, these must be verified and approved prior to execution; from there on, your system can still carry out the approved action if you wish (see the sketch below). So once again, the answer to the Terminator/Germinator question really boils down to who holds the main key. I strongly believe that the keeper of that gate should, without exception, be us humans. That said, when built this way, AI agents are highly likely to prove most helpful for Family Offices, Fund Managers and the like, giving you a real competitive edge over non-adapters.
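As an illustration, here is a minimal Python sketch of such a human-in-the-loop approval gate. The execute_order function is a hypothetical placeholder for whatever broker or banking API you would actually use; the essential point is that nothing is executed until a person explicitly confirms.

```python
# Minimal sketch of a human approval gate: the agent may propose an
# order, but execute_order() (a hypothetical placeholder) only runs
# after an explicit "y" from a human.
def approve_and_execute(proposal: dict, execute_order) -> bool:
    print("Proposed action:")
    for key, value in proposal.items():
        print(f"  {key}: {value}")
    answer = input("Execute this order? [y/N] ").strip().lower()
    if answer == "y":
        execute_order(proposal)   # runs only after explicit human approval
        return True
    print("Rejected - nothing was executed.")
    return False
```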

How easy is it to build a Finance AI Agent?

At present, it is possible but still far from easy to create a decent AI agent that helps you build and manage your investment portfolio. Real-time financial and macro-economic data must be scraped from various web sources on a daily basis, then structured and processed along the lines of the methods you have explained to your AI agent. It remains a relatively complex undertaking, but that may very well change within a relatively short time.

As things currently stand, building a complete, functional model often involves API keys, sometimes a bit of code (Python) and a variety of data sources to tap into for the information to be processed by your predefined algorithms; a small sketch of such a data-gathering step follows below.
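As a small illustration of the kind of Python glue this typically involves, the sketch below pulls daily price history with the open-source yfinance package and derives a simple moving average. The ticker, the look-back window and even the choice of library are assumptions made for illustration only, not recommendations.

```python
# Illustrative data-gathering step, assuming the open-source yfinance
# package (pip install yfinance); ticker and window are arbitrary examples.
import yfinance as yf

def daily_snapshot(ticker: str = "AAPL", window: int = 20) -> dict:
    history = yf.Ticker(ticker).history(period="6mo")  # daily OHLCV data
    closes = history["Close"]
    sma = closes.rolling(window).mean()                 # simple moving average
    return {
        "ticker": ticker,
        "last_close": float(closes.iloc[-1]),
        f"sma_{window}": float(sma.iloc[-1]),
    }

if __name__ == "__main__":
    print(daily_snapshot())
```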

The future is now: software already exists that allows the creator of an AI agent to organize all the necessary processes and information sources, even include other existing AI agents in the setup, and format the resulting output as desired. Expect an explosion of personal AI agents once this type of software becomes more intuitive, more common, easier to understand and freely available.

This is where the future development of AI Financial Agents lies. I am convinced it will not take years to reach the inflection point where non-geeks can readily build their very own AI agents helped by this new type of software.

Conclusions

Whether AI agents will behave as Terminators or Germinators ultimately depends on who holds and controls the execution key. It is perfectly fine for a system to come autonomously to a given decision, which may be totally wrong for all kinds of reasons (processing ghost information, for instance). Knowing this, it is absolutely crucial that such a system learns from each mistake - without having executed that decision - and incorporates that knowledge into its future reasoning.

IF, and only if, all of the above points are respected, I am comfortable saying that such an AI agent will be a Germinator. Otherwise, sooner or later that system will behave like a Terminator. The good news is that we hold the key, if we want to. Winners will be those who remain curious about how this technology can be put to good use and how to incorporate it into their daily lives.

Remember: YOU hold the key.

If you have ideas or thoughts to share, please click here:

 

Do you like this type of content? Join us and subscribe for free here:

Stay curious and invest wisely – more than ever!