Why We Need a Healthy Skepticism of AI in Project Controls
Discussion about artificial intelligence (AI) and its uses, including worries about its impact on the workforce and on ethics, increased in 2024. The conversation spans all industries, including those that rely on project controls and project management. It’s common to hear statements about AI “being a total game changer,” with the potential to take over major functions in all kinds of roles.
AI is of both personal and professional interest to me. I enjoy experimenting with it and see plenty of opportunity. Yet I think it’s critical that we maintain a healthy level of skepticism about these technologies when it comes to our work in project controls. For example, I recently uploaded a process document to an AI tool and asked it to find an answer within the document. It did so successfully, but it could not provide the page number where the answer appeared so that I could go back and verify the information. The AI could filter the data and provide an answer, but it could not show me where that answer came from or give important context (the page number).
This is a perfect example of why we need to maintain a healthy skepticism of AI, especially in the project controls world. Trying to apply it to everything removes a key human element and the ability to cross-check, both of which are critical to success in project controls. Let’s dive into specific reasons to be an AI skeptic: incorrect definitions of what AI is and is not; how these tools interpret inputs; and data hallucination.
What AI is and isn’t
Artificial intelligence is ultimately exactly that: artificial. We are still very, very far away from an AI that is an autonomous equal to the human brain.
When most people refer to AI today, they’re referencing tools like ChatGPT, which are large language models (LLMs). LLMs are deep learning models trained on vast amounts of data, and they rely on natural language understanding and natural language processing. This distinction is important. When we talk about AI, we’re not discussing humanoid robots that can replace us in our jobs. As it stands today, we’re talking about LLMs that require inputs and education from us to evolve.
LLMs’ interpretation of inputs
LLMs interpret requests and language based on what they were trained on. This is not the same thing as the critical thinking a human does. LLMs are trained on “tokens,” small chunks of text that the model strings together into what look like “ideas.”
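To make tokens less abstract, here is a minimal sketch using the open-source tiktoken library (my choice for illustration; other tokenizers behave similarly) to show that a model works with integer token IDs and text fragments, not whole words or concepts:

```python
# A minimal sketch, assuming the open-source "tiktoken" package is installed
# (pip install tiktoken). Other tokenizers behave similarly.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by several OpenAI models

text = "Think of an apple."
token_ids = encoding.encode(text)

# The model never sees whole words or concepts, only these integer tokens.
print(token_ids)
for token_id in token_ids:
    # decode_single_token_bytes shows the text fragment each token represents
    print(token_id, encoding.decode_single_token_bytes(token_id))
```

Even a short sentence gets broken into fragments that carry no meaning on their own; the model learns statistical relationships among those fragments rather than “understanding” apples.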
The words you send to an AI or LLM don’t mean the same thing they would to a human, because AI interprets context differently than a human does. If you ask someone to think of an apple, they’ll likely picture a single apple. Maybe they’ll have some individual experiences with apples that add context about flavor or scent, but generally it will be just an apple. An AI, however, will add context based on the images of apples it was trained on. This may generate an apple on a plate or in a basket, for example. Neither of those things is the apple, but AI will add them based on the context it knows.
AI likes to “put a bow” on the outputs it gives you. Producing an apple on a plate in response to a request for an image of an apple is a “neater” answer, showing what the AI perceives to be a fuller picture, even though it essentially conjured that plate from nowhere. This “bow” is really useful when writing formal emails but concerning when handling data or summarizing text for a project.
There is both an imperfect art and an imperfect science to crafting AI requests that get the response you want. How an AI was trained narrows down the possible outputs it can give you. It might not know how to read your input, or certain words might carry so much weight that the AI treats the other words as irrelevant. While this can make experimenting with tools like image generators fun, it is a problem when using AI to work with budget data from complex projects.
Data hallucination
AI’s struggles with interpretation can lead to outright hallucination of data. Data hallucinations happen when an AI or LLM infers a nonsensical pattern from the inputs it is given and provides a false, often illogical answer. In one recent example, a user of Google’s Gemini was told that you can safely eat one rock a day.
Beyond being false, a major problem is that AI often presents these hallucinations confidently and authoritatively. The output reads as sure of itself, which can cause problems for someone pressed for time and unable to verify it.
In project controls, these illogical answers may not be as noticeable initially. Data hallucinations aren’t always grandiose, obviously incorrect statements. They could fly under the radar and cause huge issues later. This is exactly why I wanted an AI tool that could pull an answer from a document AND give me the page number that answer is on. Cross-checking is critical when dealing with complex data sets in your reporting.
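As a sketch of what that cross-check could look like in practice, the snippet below assumes the model has been prompted to return its answer alongside a verbatim quote and a page number, then confirms the quote actually appears on the cited page of the source PDF. The pypdf library, the file name, and the response fields here are all my own illustrative assumptions, not the API of any particular AI tool.

```python
# A minimal sketch of verifying an AI-provided citation, assuming the model was
# asked to return an answer, a verbatim quote, and a 1-based page number.
# Requires the open-source "pypdf" package (pip install pypdf).
from pypdf import PdfReader

# Hypothetical response fields; a real tool's output format will differ.
ai_response = {
    "answer": "Change orders over the approval threshold require director sign-off.",
    "quote": "require director sign-off",
    "page": 12,
}

reader = PdfReader("process_document.pdf")
page_text = reader.pages[ai_response["page"] - 1].extract_text() or ""

if ai_response["quote"] in page_text:
    print(f"Verified: quote found on page {ai_response['page']}.")
else:
    print("Not verified: go back to the source before trusting this answer.")
```

The point is not the specific library; it is that any answer an AI pulls from project documentation should be traceable to a spot a human can check.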
Trust, but verify
AI has come a long way and does have a place as a supportive tool in project controls. We can’t just accept it as an infallible tool, however, and must understand the ways hallucinations and misinterpretations happen.
There’s ample opportunity to develop AI tools that help us improve scheduling and resource forecasting. Several labs are doing interesting work with LLMs, and team members are encouraged to interact with them and experiment. This is an important innovation space that could eventually make all our lives a bit easier in project controls.
In the meantime—stay skeptical, verify, and explore.