Unlocking AI's Deep Domain Expertise with Knowledge Elicitation
The vast universe of generative AI and Large Language Models (LLMs) promises groundbreaking capabilities, yet one persistent question remains: How can we endow AI with deep, domain-specific expertise? The answer lies in revisiting techniques from AI’s past.
The Revival of Knowledge Elicitation
Knowledge elicitation isn’t a novelty. Rooted in the age of rule-based expert systems, it served as a bridge between human intellect and machine capability. Today, it finds renewed relevance as AI seeks to assimilate profound yet implicit human expertise. According to Forbes, this methodology is pivotal to transforming LLMs into repositories of best practices.
Blueprint for Domain Expert LLMs
Consider tailoring an LLM to excel in a specific field, such as medicine or law. The process starts with amassing relevant documents, which are then made available to the model using techniques like retrieval-augmented generation (RAG). Herein lies a challenge, however: not all expert knowledge is documented. The essence of true expertise often resides in the collective experiences and nuanced intuitions of industry veterans.
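To make the RAG step concrete, here is a minimal sketch of the retrieve-then-prompt loop. The tiny corpus, the bag-of-words retriever, and the prompt template are all illustrative assumptions standing in for a real embedding index and production pipeline.

```python
# Minimal RAG sketch: retrieve the most relevant passages from a domain
# corpus, then build a grounded prompt for an LLM. The corpus and the
# stopword list are hypothetical; a real system would use embeddings.

from collections import Counter
import math

# Hypothetical domain corpus: short passages from expert documents.
CORPUS = [
    "Myocardial infarction presents as chest pain radiating down the left arm.",
    "Statutes limiting claims vary across jurisdictions.",
    "Aspirin is commonly administered during a suspected heart attack.",
]

STOPWORDS = {"the", "of", "a", "are", "what", "is", "to", "with", "and", "by", "as", "down"}

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()
            if w.strip(".,").lower() not in STOPWORDS]

def score(query, passage):
    # Cosine similarity over term counts, a stand-in for vector search.
    q, p = Counter(tokenize(query)), Counter(tokenize(passage))
    dot = sum(q[t] * p[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What helps during a heart attack?"))
```

The key design point is that the model never needs the whole corpus at once; only the top-scoring passages are injected into the prompt at query time.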
Knowledge Elicitation in Practice
Enter the practice of knowledge elicitation. A methodical engagement with experts can surface undisclosed rules of thumb and trade secrets. From interviewing professionals to parsing verbal protocols, the aim is to capture tacit knowledge and bring it into the AI’s fold. For instance, Lance Eliot illustrates a stock trader’s expertise being codified into LLMs, thereby expanding the AI’s repertoire with niche strategies.
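One common way to bring elicited heuristics into an LLM's fold is to codify them as structured rules and compose them into a system prompt. The sketch below is a hypothetical illustration of that step: the trader rules, the `ElicitedRule` structure, and the prompt format are all invented for the example, not drawn from Eliot's article, and no specific LLM API is assumed.

```python
# Sketch: codifying interview output from a knowledge-elicitation session
# into a system prompt. All rules and field names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ElicitedRule:
    condition: str   # when the expert says the rule applies
    action: str      # what the expert actually does
    rationale: str   # the tacit "why" surfaced during the interview

# Hypothetical heuristics elicited from a stock trader.
RULES = [
    ElicitedRule(
        condition="volume spikes above twice the 30-day average",
        action="wait one session before entering a position",
        rationale="spikes often overshoot and partially retrace",
    ),
    ElicitedRule(
        condition="earnings are announced within five trading days",
        action="reduce position size",
        rationale="event risk dominates technical signals",
    ),
]

def rules_to_system_prompt(rules):
    lines = ["You are a trading assistant. Apply these expert heuristics:"]
    for i, r in enumerate(rules, 1):
        lines.append(f"{i}. If {r.condition}, then {r.action} "
                     f"(because {r.rationale}).")
    return "\n".join(lines)

print(rules_to_system_prompt(RULES))
```

Capturing the rationale alongside each rule matters: it preserves the tacit reasoning that distinguishes an expert's judgment from a bare checklist.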
Advancing with Synthetic Experts
Once AI models absorb knowledge elicited from human counterparts, they can act as synthetic experts. By simulating domain mastery, AI can approximate the advisory role of a consultant, albeit with limitations. While artificial general intelligence remains a future aspiration, the strategic application of knowledge elicitation today can forge a foundation for competent, narrow AI experts.
Bridging the Gap: Narrow vs. General AI
The debate over narrow and general AI continues to shape the landscape. While some argue that LLMs already manifest elements of general intelligence, others maintain that true expertise necessitates artificial general intelligence. In either scenario, the integration of human-devised practices into AI frameworks promises to enrich the domain-specific capabilities of generative models.
In the words of Elbert Hubbard, focusing on quality work today sets the stage for excellence tomorrow. Embedding human knowledge into AI not only democratizes expertise but also elevates LLMs to new heights of functionality and relevance.