Anxiety or Anticipation: The Realities of AI for L&D

There’s a lot of hype right now about Artificial Intelligence (AI). Some of it is glowing, heralding AI as the future of work, and we’re seeing people offer courses to help you cope. Some of it is alarmist, whether the concerns are about accuracy, security, or veracity. Both positions are somewhat justified, and both bear some scrutiny. In reality, there’s a sensible path, but it takes some knowledge about how these systems work to take advantage of them in useful ways.

To begin, I’m not an AI expert! Instead, I’m more of an AI groupie. That is, I’ve been tracking AI since my undergraduate days (which are quite far away now), which means I’ve taken AI courses and followed the approaches as they’ve developed. I do have a background that includes computing (including designing and programming award-winning software) and cognitive science (with an actual Ph.D. in cognitive psychology), the two essential contributory fields to AI. I taught in a school of computer science, and was well-acquainted with our AI group. Moreover, I’ve been fortunate to be around for some seminal AI events. For one, I was able to attend a retreat with some of the luminaries discussing the various approaches for a magical few days in the woods outside MIT. For another, I fortuitously was a grad student at the same lab (different group) where the connectionist revolution began, and was privy to the debates that raged. In short, I do have some conceptual background.

A major distinction that’s necessary to understand is that, at the time of the post-cognitive revolution, most AI (and most cognitive science) was heavily symbolic. That is, it manipulated whole concepts, using rules. It was semantic, formally logical, and focused on meaning. The revolution, however, recognized that we’re not formal beings. A new approach, focused on situated action (context makes a big difference), was emerging, and it sparked a move to another form of computation: sub-symbolic. While there were other approaches (e.g. genetic algorithms), the emergent solution came from neural nets, or connectionist models. These models underpin essentially all of what’s now known as machine learning. That includes Large Language Models (LLMs), which are the current flavor of the moment.
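
To make the distinction concrete, here’s a toy sketch in Python (invented purely for illustration, not drawn from any real system): the symbolic approach applies an explicit rule to whole concepts, while a sub-symbolic ‘node’ just weights and sums numbers, with no concepts anywhere in sight.

```python
# Toy contrast between the two traditions (illustrative only).

# Symbolic: explicit concepts manipulated by explicit, hand-written rules.
def symbolic_classify(animal):
    # The rule operates on whole, meaningful concepts ("has_feathers", "bird").
    if animal.get("has_feathers") and animal.get("lays_eggs"):
        return "bird"
    return "unknown"

# Sub-symbolic: no concepts or rules, just weighted numbers.
def subsymbolic_classify(features, weights, bias):
    # A single artificial "neuron": a weighted sum pushed through a threshold.
    activation = sum(f * w for f, w in zip(features, weights)) + bias
    return "bird" if activation > 0 else "unknown"

print(symbolic_classify({"has_feathers": True, "lays_eggs": True}))  # bird
print(subsymbolic_classify([1.0, 1.0], [0.6, 0.5], -0.8))            # bird
```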

These systems don’t operate on rules, but instead are built (programmatically) as large networks of nodes. They work in, essentially, two different ways. In one, they’re given raw data and run to find whatever patterns are there, which is typically called unsupervised learning. Here, for example, we might see what they find as meaning in whale sounds. Alternatively, and more commonly, systems are trained on a set of historical data until they can handle similar data in the same way, which is supervised learning. For instance, we can train them on historical loan data, and then have them evaluate new loan applications.
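
As a rough sketch of those two modes, here’s a minimal example using scikit-learn; the ‘acoustic’ and loan numbers are invented purely for illustration.

```python
# A minimal sketch of unsupervised vs. supervised learning (invented data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Unsupervised: raw data only; the algorithm looks for whatever structure is there.
sounds = np.array([[0.10, 0.20], [0.15, 0.22], [0.90, 0.85], [0.95, 0.80]])  # pretend acoustic features
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(sounds)
print(clusters)  # e.g. [0 0 1 1]: two groupings found without any labels

# Supervised: historical examples with known outcomes, learned and then reapplied.
loans = np.array([[2.5, 0.10], [8.0, 0.40], [3.0, 0.90], [7.0, 0.05]])  # income (tens of $k), debt ratio
repaid = np.array([1, 1, 0, 1])  # historical outcomes
model = LogisticRegression().fit(loans, repaid)
print(model.predict([[5.0, 0.20]]))  # evaluates a new application the same way
```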

The current LLMs have been trained on large corpora of various media, for example text or images. The training is largely self-supervised, where the data provides its own targets (such as predicting the next word), with some human supervision layered on top. The internet, for instance, is a rich source of both text and images. Note that a given model is trained to do one or the other, not both. Once trained, they can respond to probes (prompts), generating prose or images that represent what they ‘know’. For instance, they can take some aspect of learning such as feedback, which has been represented in various forms around the internet, and state it in the style of William Shakespeare (which is also represented on the internet). Or they can generate an image that represents their understanding of a dragonfly in the style of Pablo Picasso (again, both available). While these are fanciful, there are useful applications, such as asking for a summary of what’s known about spaced learning.
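
For the curious, here’s roughly what probing a trained language model looks like in code, using the open-source Hugging Face transformers library and a small public model (gpt2). A model this small won’t produce convincing Shakespeare, and the prompt is an invented example; the point is just the probe-and-respond pattern.

```python
# A minimal sketch of prompting a pre-trained language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Feedback on learner performance is most effective when"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])  # prose continued from whatever patterns the model absorbed
```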

Which is where the problems begin. The sub-symbolic models are probabilistic, in that they generate outputs based upon a statistical interpretation of the data. If, for instance, there are historical biases in the data (which has proven to be the case), they will similarly exhibit such biases. For language, the LLMs base their response on what sounds like good prose. However, since they’ve no notion of ‘truth’, what they say can be something that sounds good, but isn’t accurate; they proverbially ‘hallucinate’.
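
A toy example makes the ‘statistical, not truthful’ point: generation is just repeated sampling from learned probabilities (the numbers below are invented), so a continuation can be fluent and plausible while still being wrong.

```python
# Toy illustration: the next word is drawn from a probability distribution
# learned from data, with no notion of truth anywhere in the process.
import random

# Hypothetical learned probabilities for the word following "spaced":
next_word_probs = {"practice": 0.55, "repetition": 0.30, "out": 0.10, "shuttle": 0.05}

words, probs = zip(*next_word_probs.items())
for _ in range(5):
    # Each draw follows the statistics of the training data, accurate or not.
    print("spaced", random.choices(words, weights=probs)[0])
```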

Overall, there are problems with both approaches. The symbolic models were based upon formal models of domains, which worked well in well-structured domains, particularly ones we’ve made up, like mathematics and programming. For instance, there are elegant AI tutors in such domains. However, they’re very dependent on the quality of the analysis of the domain. There’s a running joke in symbolic AI that once it’s understood, it’s no longer AI, it’s just good programming. Connectionist models, on the other hand, are very sensitive to the quality of the data they’re trained on. There’s a belief that they do their thing so well, they must be intelligent. But they have no ability to move gracefully beyond what’s known. So, what’s a person to do?

There are separate approaches for each, and then a combined approach. For symbolic models, if you can create an accurate description of the domain, and create rules, they can make inferences in places where you haven’t explicitly specified the parameters. The good news is that they don’t fatigue and aren’t prone to random errors. If you can build a good one, it can do the job reliably and repeatedly. They can also parse numbers and text and do categorization. The field of Text Analytics is a growing area where people apply rules to address content in useful ways, such as supporting search, detecting sentiment, and more. For instance, you can build a chatbot to respond to queries about a suite of documents, serving as a first level of response for HR.
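
As a minimal sketch of that kind of rule-driven, first-level responder (the documents and keyword rules here are invented for illustration; real text analytics tools are considerably more sophisticated):

```python
# A hand-written, rule-based first pass at answering HR queries (invented content).
HR_DOCS = {
    "leave": "See the Leave Policy: submit requests at least two weeks in advance.",
    "expenses": "See the Expense Policy: receipts are required for claims over $25.",
}

RULES = {
    "leave": ["vacation", "holiday", "leave", "time off"],
    "expenses": ["expense", "reimburse", "receipt", "travel cost"],
}

def first_level_response(query):
    q = query.lower()
    for topic, keywords in RULES.items():
        # Explicit rules: if any keyword appears, route to that policy document.
        if any(k in q for k in keywords):
            return HR_DOCS[topic]
    return "I couldn't match that; routing you to a human."

print(first_level_response("How do I get reimbursed for travel costs?"))
```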

For machine learning, you need a good data set to train them on. If you can find an unbiased source of data, you can get good output. One of the problems with the language models, however, is that they take such a large quantity of input to generate good prose that it’s hard to find a vetted source. One approach that’s proved useful is to treat them not as a trusted source, but as a thinking ‘partner’. That is, you use them to generate ideas, but you scrutinize the output. They can generate questions from text, for instance, but some won’t be good. What they can’t do, beyond being trusted to be correct, is anything having to do with human emotion. They shouldn’t be trusted to come up with a good hook alone, and they definitely can’t create meaningful practice.
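
Here’s a sketch of that ‘thinking partner’ workflow. The drafted questions below simply stand in for whatever an LLM might return from a prompt like “write quiz questions about spaced learning”; the human review gate is the part that matters.

```python
# Model drafts, human vets: generated ideas are raw material, not final output.
draft_questions = [  # placeholder output standing in for an LLM response
    "Why does spacing practice over time improve retention?",
    "In what year was spaced learning invented?",  # plausible-sounding, but a poor question
]

def human_review(questions):
    kept = []
    for q in questions:
        answer = input(f"Keep this question? [y/n] {q}\n> ")
        if answer.strip().lower().startswith("y"):
            kept.append(q)
    return kept

vetted = human_review(draft_questions)
print(f"{len(vetted)} of {len(draft_questions)} drafts survived review.")
```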

Combining symbolic and sub-symbolic models is an active area of research, and some results are being found, but they tend to be in focused domains to meet real needs. There are also ethical issues: how the data is used (c.f. the current writers’ strike, in part about how AI is used to generate TV and movie scripts), the energy consumed by the necessarily large systems that deliver machine learning, and more. Those aren’t addressed here, but they will play a role in ultimate decisions about how AI is used.

There are powerful opportunities, and considerable risks. In general, the suggestion is to use AI as a partner, not as a replacement. That’s a form of combination, human and computer, which actually makes a lot of sense. What our brains do well is look for patterns and meaning. What computers do well is deal with large and arbitrary data, without knowing what any of it means (but able to pick out correlations). Together, the whole is greater than the sum of the parts. For now and the foreseeable future, it will make sense to have AI such as LLMs as a partner in production, but not responsible for anything, and certainly not as a sole provider of any information.