It’s been an exciting few months since OpenAI released ChatGPT. Everyone is now talking about it, many are talking to it, and all of us are watching to see what happens next.
No wonder. ChatGPT has raised the bar for what computers are capable of and offers a window into what is possible with AI. And now that tech giants Microsoft, Google, and Meta are joining the race, we all need to buckle up for an exciting but potentially bumpy ride.
At the core of these capabilities are Large Language Models (LLMs), and specifically the generative LLM that makes ChatGPT possible. LLMs are not new, but the pace of innovation, the opportunities, and the scale are all accelerating at breakneck speed.
Peek behind the AI curtain
There’s also a lot going on behind the scenes, which has led to confusion, with some mischaracterizing ChatGPT as a Google killer or claiming that generative AI will replace search (or vice versa).
First, it is important to distinguish between search and generative AI. The purpose of search is retrieval: discovering information that already exists. Generative AI and applications like ChatGPT are, as the name implies, generative: they create something new based on what the LLM has learned.
ChatGPT feels a bit like search because you interact with it through natural, conversational questions and it responds with well-written, very confident prose. But unlike search, ChatGPT does not retrieve information or content; instead, it generates an imperfect reflection of the material it already knows (the material it was trained on). In fact, its output is nothing more than a patchwork of words assembled on the basis of probabilities.
While LLMs will not replace search, they can complement search. The real power of using generative LLMs for search is convenience: summarizing the results in a concise, easy-to-read format. Combining generative LLMs with search will open doors to new possibilities.
Find a testing ground for AI and LLMs
Generative models based on LLMs are here to stay and will revolutionize the way we do things. Today’s low-hanging fruit is synthesis: making lists and writing summaries of common topics. Most of these capabilities do not fall into the search category. But the search experience will be transformed by specialized LLMs tailored to specific needs.
Thus, amid the excitement surrounding generative AI, LLMs, and ChatGPT, one theme prevails: search will become a testing ground for AI and LLMs. This is especially true for enterprise search. Unlike B2C applications, B2B and business applications have a much lower tolerance for inaccuracies and a much higher need to protect sensitive information. The adoption of generative AI in enterprise search will lag behind web search and will require creative approaches to meet specific business challenges.
In this regard, what lies ahead for enterprise search in 2023? Here are five themes that will shape the future of enterprise search in the coming year.
LLMs Expand Search Opportunities
Until recently, using LLMs for search was expensive and cumbersome. That changed last year, when the first companies began incorporating LLMs into enterprise search. This led to the first major leap forward in search technology in decades, resulting in faster, more targeted, and more forgiving search. And yet we are only at the beginning of the journey.
As better LLMs emerge and existing LLMs are customized for specific tasks, we can expect rapid improvements in the power and capabilities of these models this year. It’s no longer just about finding a document; we will be able to find a specific answer within the document. We will no longer need to use exactly the right keywords; information will be retrieved based on meaning.
LLMs will do a better job of finding the most relevant content, bringing us more targeted results, and will do it in natural language. And generative LLMs promise to synthesize search results into easily digestible, understandable summaries.
Search helps fight the loss of knowledge
The loss of organizational knowledge is one of the most serious yet under-reported issues facing businesses today. High employee turnover, whether through resignations, layoffs, M&A restructuring, or retirements, often results in knowledge being stranded on information islands. This, combined with the shift to remote and hybrid work, dramatic shifts in customer and employee expectations, and the explosive growth of unstructured data and digital content, has placed a huge strain on knowledge management.
In a recent survey of 1,000 IT managers at large enterprises, 67% said they were concerned about the loss of knowledge and expertise when employees leave the company. And the cost of knowledge loss and ineffective knowledge sharing is high. IDC estimates that Fortune 500 companies lose roughly $31.5 billion a year by failing to share knowledge, an alarming number, especially in today’s uncertain economy. Improving search and retrieval tools at a Fortune 500 company with 4,000 employees could save approximately $2 million per month in lost productivity.
Intelligent enterprise search prevents information islands and makes it easy for organizations to find, surface, and share information and the corporate knowledge of their top performers. Finding knowledge and expertise in the digital workplace should be smooth and easy. The right enterprise search platform connects employees with knowledge and expertise, and even unifies disparate information repositories to facilitate discovery, innovation, and productivity.
Search eliminates app sprawl and digital friction
Employees today are drowning in tools. According to a recent Forrester study, organizations use an average of 367 different software tools, creating data silos and disrupting processes across teams. As a result, employees spend 25% of their time searching for information instead of focusing on their work.
This not only directly impacts employee productivity but also affects revenue and customer outcomes. This app sprawl exacerbates information silos and creates digital friction: the constant context switching from one tool to another just to get the job done.
According to a recent Gartner survey, 44% of users have made a wrong decision because they were unaware of information that could have helped, and 43% report failing to notice important information because it got lost amid too many apps.
Intelligent enterprise search gives employees a unified gateway so they can seamlessly and securely access all corporate knowledge from a single interface. This greatly reduces switching between applications, easing the frustration of an already fatigued workforce while improving productivity and collaboration.
Search becomes more relevant
How often do you find what you’re looking for when searching within your organization? Nearly a third of employees report that, always or most of the time, they simply never find the information they are looking for. So what do they do? Guess? Make something up? Charge ahead in ignorance?
Search relevance is the secret sauce that allows scientists, engineers, decision makers, knowledge workers, and others to find the knowledge, expertise, and insights they need to make informed decisions and take more action, faster. It measures how closely search results match the user’s query.
Results that better match what the user hopes to find are more relevant and should appear higher on the results page. Yet many enterprise search platforms today lack the ability to understand user intent and return relevant results. Why? Because relevance is difficult to engineer and tune. So we live with the consequences.
Smart enterprise search tools work much better and deliver far more relevant results than in-app search. But even they can struggle with complex scenarios, and the desired results may not appear at the top of the list. The advent of LLMs, however, has opened the door to vector search: retrieving information based on meaning.
Advances in neural search embed LLM technology in deep neural networks: models that incorporate context to provide superior relevance through semantic search. Moreover, combining semantic and vector search approaches with statistical keyword search ensures relevance across a wide range of enterprise scenarios. Neural search is the first step in decades toward flipping relevance on its head, so that computers learn to work the way people do, and not the other way around.
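The hybrid idea, blending a statistical keyword score with a meaning-based vector score, can be sketched in a few lines of Python. Everything here is illustrative: the hand-made three-dimensional word vectors stand in for the embeddings a real LLM encoder would produce, and the blending weight `alpha` is a hypothetical parameter.

```python
import math

# Toy word vectors standing in for a real LLM embedding model; in practice
# these would come from a trained encoder (this dict is purely illustrative).
WORD_VECS = {
    "car":     [0.90, 0.10, 0.0],
    "vehicle": [0.85, 0.15, 0.0],
    "repair":  [0.10, 0.90, 0.0],
    "fix":     [0.12, 0.88, 0.0],
    "invoice": [0.00, 0.10, 0.9],
}

def embed(text):
    """Average the vectors of known words (zero vector if none match)."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of query terms that appear verbatim in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, docs, alpha=0.5):
    """Rank documents by a blend of semantic and keyword scores."""
    q_vec = embed(query)
    scored = [
        (alpha * cosine(q_vec, embed(doc))
         + (1 - alpha) * keyword_score(query, doc), doc)
        for doc in docs
    ]
    return [doc for _, doc in sorted(scored, reverse=True)]

docs = ["vehicle repair manual", "invoice template", "car fix guide"]
print(hybrid_search("car repair", docs))
```

For the query "car repair", both the lexical match ("car fix guide") and the semantic match ("vehicle repair manual", which shares no exact query term beyond "repair" but sits nearby in vector space) outrank the unrelated invoice document, which is exactly the behavior hybrid ranking is after.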
Question answering gets a neural boost
Have you ever wished your enterprise search worked like Google, where you get an answer right away instead of first finding the right document, then the right section, and then combing through paragraphs for the right information? For simple questions, wouldn’t it be nice to just get a straight answer?
Thanks to LLMs and their ability to work semantically (based on meaning), question answering (QA) is now available in the enterprise. Neural search accelerates QA: users can extract answers to simple questions when those answers are present in the search index. This reduces time to insight, allowing employees to get a quick answer and continue their workflow without being derailed by a lengthy hunt for information.
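As a rough illustration of extractive QA, the sketch below picks the sentence in a document that best overlaps with the question's terms. A real neural reader would use an LLM to locate an exact answer span rather than a whole sentence; the stop-word list and the sample policy text are invented for the example.

```python
import re

def extract_answer(question, document):
    """Return the document sentence that best matches the question terms.

    Keyword overlap is a simplified stand-in for the neural reader models
    used in real enterprise QA systems.
    """
    # Minimal illustrative stop-word list; real systems use larger ones.
    stop = {"what", "is", "the", "a", "an", "of", "for", "how", "do", "does"}
    q_terms = set(re.findall(r"[a-z]+", question.lower())) - stop
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())

    def overlap(sentence):
        s_terms = set(re.findall(r"[a-z]+", sentence.lower()))
        return len(q_terms & s_terms)

    return max(sentences, key=overlap)

doc = ("The parental leave policy was updated in 2022. "
       "Employees accrue fifteen vacation days per year. "
       "Expense reports are due by the fifth of each month.")
print(extract_answer("How many vacation days do employees get?", doc))
# → Employees accrue fifteen vacation days per year.
```

Instead of returning the whole policy document, the function surfaces the one sentence that answers the question, which is the time-saving effect described above, in miniature.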
Thus, question answering will increase the usefulness and value of intelligent enterprise search, making it easier for employees to find what they need. Enterprise QA is still in its infancy, but the technology is evolving rapidly; we will see wider adoption of AI technologies that can answer questions, find similar documents, and otherwise reduce the time it takes to gain knowledge, making it easier for employees to focus on their work.
Innovation is built on knowledge and its connections. It comes from the ability to interact with content and with each other, extract meaning from those interactions, and create new value. Enterprise search facilitates these connections across information stores and is therefore a key driver of innovation.
With advances in artificial intelligence such as neural networks and LLMs, enterprise search is reaching a whole new level of precision and capability.
Jeff Evernham is vice president of product strategy at enterprise search provider Sinequa.