AI and Spend Analysis
We are often asked about AI and its applicability to Spendata and to spend analysis.
In spend analysis, AI is commonly applied to two tasks:
- grouping together (“familying”) accounts payable transactions with inconsistently named suppliers (e.g. putting “I.B.M.” and “IBM” under the same name), and
- assigning a commodity (e.g. “IT>hardware>mainframe”) to each transaction, based on other information contained in the transaction.
Armed with this transformed data, analysts can accurately determine “who bought what from whom” – the starting point for any serious look at spending.
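To make the familying task concrete, here is a minimal sketch of normalization-based supplier grouping. The suffix list, `normalize`, and `family` helpers are hypothetical illustrations, not any vendor's actual engine; a production system would rely on far richer heuristics and a curated knowledge base.

```python
import re
from collections import defaultdict

# Hypothetical (and deliberately tiny) list of legal suffixes to strip;
# a real familying engine would use a much larger, curated knowledge base.
SUFFIXES = {"inc", "corp", "co", "ltd", "llc"}

def normalize(name: str) -> str:
    """Reduce a raw vendor string to a canonical key."""
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    tokens = [t for t in tokens if t not in SUFFIXES]
    return " ".join(tokens)

def family(vendors):
    """Group raw vendor names whose normalized keys collide."""
    families = defaultdict(list)
    for v in vendors:
        families[normalize(v)].append(v)
    return dict(families)

groups = family(["I.B.M.", "IBM", "IBM Corp.", "Dell Inc", "DELL"])
# "I.B.M.", "IBM", and "IBM Corp." all normalize to "ibm" and land
# in one family; the two Dell variants land in another.
```

Even this toy version shows why inconsistent names defeat naive grouping: without normalization, "I.B.M." and "IBM" are distinct strings and their spend is split across two suppliers.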
Generative AI: Unhelpful
When we talk about AI, we must be careful to distinguish between bespoke AI engines built for specific purposes, and “generative” AI (ChatGPT, etc.) that manufactures content probabilistically and generically. Although generative AI may be helpful when constructing a survey article on spend analysis, it is unhelpful for specific spend analysis tasks like familying and mapping. That’s because its understanding is based on a large language model (LLM) trained on written, grammatical text.
To correctly process the shorthand of an item description or an abbreviated vendor name in an accounting transaction, an AI engine must be trained on a language corpus quite different from the King’s English. It’s therefore not surprising that generative AI performs poorly when asked to map commodities.
It is even more unreasonable to expect a generative AI to provide data-aware insights on sourcing and demand management, since it will have had no training to allow it to do so. Nevertheless, it should be possible to construct a bespoke AI that provides simplistic advice along the same lines as that provided by data-aware reports (e.g. “consider using [more/fewer] suppliers”). It should be noted that when such reports were first created (circa 2005), procurement industry analysts became excited and began speculating about “the end of the commodity manager.” That episode was embarrassing and is best forgotten, but there would likely be a similar overreaction if such an AI were to surface, especially since (unlike a report) it would deliver its banalities in complete sentences and well-constructed paragraphs.
Bespoke AI Engines
Within bespoke AI engines, there are basically two approaches: “traditional” AI, which operates using a combination of heuristics and knowledge bases, and “deep learning” AI, in which a neural net model is trained on a corpus of domain-specific data. From a black box perspective, both systems work the same way: provide an input transaction to the system, and an “answer” is returned in the form of a familying or mapping operation.
Both traditional AI and deep learning AI automapping technologies require manual review and adjustment. Traditional AI makes very few errors, since it is matching normalized suppliers to a curated knowledge base, but it tends to map fewer transactions. Deep learning AI makes more errors, including embarrassing high-spend miscategorizations, so manual cleanup is not optional – but it tends to map more transactions. Both approaches net out to a similar overall effort, as is apparent from cost and time estimates from spend analysis vendors using both techniques.
Traditional AI for mapping works well when there is a useful supplier identified in the transaction (rather than, for example, a group purchasing organization), and when the spending is indirect spending (as is the case with most spend cubes). However, if the only useful information in the transaction is an item description, which is sometimes the case with direct spending, then a deep learning AI may be the only way to automap the spend.
AI and Spendata
With respect to familying, Spendata uses traditional AI with extensive heuristics, and it performs better than any other autofamilying system of which we are aware. For example, Spendata’s autofamilying AI is able to family 95-97% of typical purchasing card transactions. Only minor manual cleanup is required.
With respect to mapping, Spendata uses a traditional AI approach, but with major differences from other vendors. Rather than simply stamping transactions with results, Spendata’s automapping generates human-accessible, editable mapping rules – exactly the same rules as those produced by manual mapping – which are then stored and applied to the transactions. This means that with Spendata it is possible to see exactly what the AI decided to do, to understand why, and to modify its decisions – none of which is possible with a deep learning AI, where the reasons for a mapping are opaque.
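The rules-as-data idea can be sketched as follows. The rule format, patterns, and `map_commodity` helper are hypothetical illustrations for this article, not Spendata’s actual rule language:

```python
import re

# Hypothetical rule format: an ordered list of (pattern, commodity) pairs.
# Because the rules are plain data, an analyst can inspect, edit, reorder,
# or delete them -- and the same rules re-apply identically on every refresh.
rules = [
    (r"\bibm\b",     "IT>hardware>mainframe"),
    (r"\bstaples\b", "Office>supplies"),
]

def map_commodity(vendor: str, rules) -> str:
    """Apply the first matching rule; deterministic for a given rule set."""
    for pattern, commodity in rules:
        if re.search(pattern, vendor.lower()):
            return commodity
    return "Unmapped"

map_commodity("IBM Corp.", rules)  # matched by the first rule
```

The design point is auditability: the mapping decision for any transaction can be traced to a specific rule, whereas a neural net offers no comparable artifact to inspect or correct.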
Once Spendata rules are created and vetted, mapping results do not change. On a data refresh, everything is consistent: new data added to the system is mapped the same way as older data, and the older data mappings are invariant. That can’t be said about a deep learning AI, where additional training (or over-training, a serious issue for such systems) can reverse or alter previous decisions without warning. This makes data refreshes significantly more challenging, since end-user confidence in a spend analysis system erodes when mappings mysteriously change.