Chatbots and artificial intelligence (AI) dominate today's society and show what's possible. However, some people don't realize the AI platforms they love to use might be receiving their knowledge from humans. The phenomenon of so-called pseudo-AI arises when companies promote their ultrasmart AI interfaces without mentioning the people working behind the scenes, effectively operating fake chatbots.
Speaking broadly, pseudo-AI and fake chatbots have only been around for a few years at most. That makes sense, since both AI and chatbots — which use AI to work — have recently reached the mainstream.
There's no single answer for why businesses started venturing into the realm occupied by pseudo-AI and fake chatbots, but saving money inevitably becomes part of the equation. Human labor is cheap and often easier to acquire than the time and tech needed to make artificial intelligence work properly.
Some companies begin by depending on humans because they need people to train the algorithms by using the technology in ways that mimic real-life situations. Humans are always involved in AI training to some degree, so that isn't unusual.
Unfortunately, though, in their eager quest to gain the attention of wealthy investors, some companies give the impression their platforms or tools are already past the stage of needing such thorough training and are fully automated.
That approach is called the "Wizard of Oz" design technique, because it reminds people of the famous movie scene where Dorothy's dog, Toto, pulls back the curtain and reveals a man operating the controls for the Wizard's giant talking head.
Some scheduling services that used AI chatbots to book people's appointments reportedly didn't mention they required humans to do most of the work. Workers would read almost all incoming emails before the AI generated an automated response. Employees are often hired as AI trainers, which makes it seem like they're only involved in helping the AI get started rather than overseeing the whole process.
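That human-behind-the-curtain setup can be pictured as a routing layer: the system tries an automated reply first, and quietly hands the message to a hidden worker whenever the model isn't confident. The sketch below is purely illustrative — the function names, the confidence threshold, and the canned intent are all assumptions, not details from any real service.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # 0.0-1.0: how sure the "model" is about its reply

def ai_draft(message: str) -> Draft:
    # Stand-in for a real model; it only handles one canned intent.
    if "schedule" in message.lower():
        return Draft("Sure -- what day works best for you?", 0.9)
    return Draft("", 0.1)  # the model has no good answer

def human_reply(message: str) -> str:
    # In a pseudo-AI setup, a hidden worker writes this reply by hand.
    return f"(human operator) Let me help with: {message!r}"

def respond(message: str, threshold: float = 0.7) -> str:
    """Return a reply that always appears to come from 'the bot'."""
    draft = ai_draft(message)
    if draft.confidence >= threshold:
        return draft.text
    return human_reply(message)  # the user never sees the handoff
```

The point of the sketch is the last line: whichever path produced the text, the user sees a single "assistant," which is exactly what makes the practice hard to detect from the outside.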
Elizabeth Holmes, the CEO of the now-disgraced blood testing company Theranos, is a perfect example of how much prestige a person or tech company can gain without solid technology to show to the public. Holmes had fantastic ideas for her products, but received early warnings from development team members that the things she envisioned were not feasible.
The company captured attention from impressed members of the media even though Holmes didn't have working prototypes for most of her planned features. One of the reasons Theranos avoided ridicule for as long as it did is the culture of secrecy in the tech sector. As a company works on a project that could become the next big thing, it has a vested interest in staying quiet about its operations and what's in the pipeline.
As such, tech investors may be reluctant to press company leaders for details about their AI, making it easier to supplement projects with human labor. People have raised concerns about how to make AI ethical. That's important, but when individuals think about ethics for AI, they don't typically think of pseudo-AI.
It's increasingly important for people to be as informed as they can about whether the AI they're using is fully automated or is a type of pseudo-AI. Fortunately, there are things to check for that can help find the real stuff.
If a solution is transparent to the user and lets them see how it works, that's a good sign. Likewise, if a company provides substantial details about its technology and functionality, it's more likely it doesn't depend on humans too much.
People can also find out whether the AI does things for users or only provides insights. If it carries out tasks, and does so more efficiently than humans could, that's a strong indicator of real AI.
When startups have datasets of unique and specialized information, the likelihood goes up that they're using real AI. Many companies that try to promote something fake focus too much on automation and not enough on the information that helps the algorithm work. Keep in mind that automated technology needs instructions to work, but true AI learns over time from the content it's trained with and its future interactions.
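The distinction in that last sentence — fixed instructions versus behavior that changes with data — can be shown with a toy contrast. Everything here is a made-up illustration: a hand-written keyword rule that never changes, next to a crude learner whose answers shift as it sees labeled examples.

```python
from collections import Counter

# Rule-based automation: behavior is fixed by hand-written instructions.
RULES = {"free", "winner"}

def rule_based_flag(text: str) -> bool:
    return any(word in RULES for word in text.lower().split())

# A toy learner: behavior changes as it accumulates labeled examples.
class NaiveKeywordLearner:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text: str, is_spam: bool) -> None:
        # Count words per label; this is the "learning from content."
        target = self.spam_words if is_spam else self.ham_words
        target.update(text.lower().split())

    def flag(self, text: str) -> bool:
        words = text.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score
```

The rule-based version will miss "limited offer" forever unless someone edits `RULES`; the learner starts flagging it after a single training example. Real AI systems are vastly more sophisticated, but the underlying difference — static instructions versus behavior updated by data — is the one the paragraph above describes.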
People define artificial intelligence in various ways. Perhaps that's because experiments and progress happen at a rate that makes it difficult to pin down what AI can do now or might do in the future.
Some companies have taken advantage of that lack of definition. In China, a Beijing-based company partnered with a state news agency and built what it presented as an AI news anchor. It used machine learning algorithms to teach the AI about a real news anchor's likeness and voice, and then fed the AI content needed for reading the news.
People soon asserted that the anchor was a digitally synthesized person representing only a very narrow use of AI. Some dismissed it as nothing more than a digital puppet.
That hasn't stopped companies from creating a slew of digital people, often celebrities who have passed away. One company uses digital scanners to capture every detail of a person's face, down to the pores and the way blood flow changes the complexion during certain facial expressions.
There's no problem with aiming for that level of accuracy when the audience is fully aware that the "person" they're seeing is a digital rendition. However, critics warn that we might soon have a culture of false celebrities to go with the fake news that's already rampant.
People must be cautious about believing new technology is real just because it's so amazing. Some AI is authentic, but there are plenty of cases where things are not quite as they appear.