There is a little-noticed talent that’s critical for success in a tech-centric world; it’s up there with being a great programmer, a master strategist, or even an innovative entrepreneur.
It’s being good at explaining stuff.
Explaining why and how something functions has always been a high-value pursuit, essential for leadership. How you explain things frames how you see the world, and the ability to clearly convey your intentions, goals and methods is the stuff of clear mission statements, great speeches, and effective selling. Defining something effectively, in this sense, establishes a kind of ownership of it, and can stir thousands to action.
It’s why Steve Jobs, among many other leaders, would spend months on a mere product presentation. He wasn’t just explaining things; he was giving context to a new way of encountering the world, through his product.
Something like that level of patience and skill is now needed in the engine rooms of business, where cloud computing, artificial intelligence (AI), and an explosion of data are reshaping how we live, work, and play, even as the rest of the world struggles to understand what’s going on. These new technologies are incredibly powerful: they deliver new insights, they make things happen at an accelerated rate, and they touch an ever-growing number of areas of life.
Putting these technologies into rapid use, then telling people how they work and why they did what they did, is critical. In fact, it’s already a big part of information technology. Fast and accurate answers to questions, easy navigation, and clean, well-organized web pages all inherently show an understanding of both user needs and product capabilities.
More important is what practitioners of AI call “explainability.” That means sorting out what an algorithm did, what data it used, and why it reached certain conclusions. If, say, an algorithm makes business decisions, those decisions need to be annotated and presented effectively.
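To make that concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, which asks which inputs a model actually leaned on. The model, data, and feature names are hypothetical stand-ins for illustration, not anything drawn from a real deployment.

```python
# Minimal sketch: permutation feature importance on a hypothetical model.
# Everything here (the synthetic data, the model choice) is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for business data: five features, a yes/no decision.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

A printout like this does not explain everything about a model, but it is the sort of artifact an AI team can put in front of a business leader to show what drove a set of decisions.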
Explainability helps business leaders understand why a company is doing what it’s doing with AI. That understanding will be even more important as AI becomes commonplace in legal matters and regulated activities.
In those cases, it will be incumbent on AI specialists to show that their data is free of bias and that the outcomes their programs reach are consistent. That is an interesting challenge for things like deep learning, where many layers of analysis and different modeling approaches can affect the outcome. In a conversation, AI researcher and professor Sir Nigel Shadbolt spoke of a future need for algorithmic accountants and data accountants: people who worry about the nature and origin of the datasets.
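As a rough illustration of the kind of check a data accountant might run, here is a minimal sketch that compares an algorithm’s approval rates across two groups. The column names, the tiny dataset, and the four-fifths threshold are hypothetical, illustrative choices rather than anything prescribed above.

```python
# Minimal sketch: compare a model's positive-decision rate across groups.
# The decision log below is a hypothetical stand-in.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A simple disparate-impact style check: flag if one group's approval
# rate falls below 80% of another's (the "four-fifths rule" heuristic).
ratio = rates.min() / rates.max()
print(f"approval-rate ratio: {ratio:.2f}",
      "(review needed)" if ratio < 0.8 else "(ok)")
```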
Elsewhere in the corporation, the increased volume of business made possible by cloud-based systems means a growing emphasis, both across departments and with external partners, on developing well-defined roles and identities, so people can move swiftly and with certainty.
Whatever the challenges, there is much to like about the explainability revolution. The potential for AI to improve our lives is vast, but the more we understand it, the more valuable it will be. For one thing, AI that is well examined and understood often surfaces data biases that arose among the humans the algorithm was aping (this story of an AI hiring program that ruled out female engineering candidates is a good example). Departments that can explain themselves to other parts of the company will likely have better outcomes, since they’ll be better understood. In turn, they can help the company explain itself to customers, and vice versa.
Fulfilling that need—to be better understood, on all sides—is a high-value activity, whatever technology is at hand.