The hidden knowledge in the data (aka we are still using AI wrong)
In every use case I've seen from peers using AI, we use LLMs to process only the obvious layer of the data we provide. We feed LLMs our data with a single intent in mind: get an answer, automate a task, summarize something. But we tend to forget that LLMs (just like humans) can extract far more than what we explicitly tell them to look at.
Let's say you work with Dust, and you plug an AI assistant into one or several Slack channels containing all the discussions that happened while building a product. The AI assistant will be able to answer every question that was previously answered by a human in the channel. Nice job!
Actually, you can take this further and uncover a first layer of (not so hidden) underlying data by asking:
Based on all the questions and answers in this channel, can you write an FAQ about the product?
Oh wow, it works. By aggregating all the questions and discussions that happened while building the product, the channel ends up holding enough information to write an FAQ. You are satisfied with your prompt and realize you just saved a few hours of documenting your product.
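If you want to reproduce this outside of Dust, the mechanics are simple. Here is a minimal sketch, assuming slack_sdk with a bot token that has the channels:history scope and the OpenAI v1 client; the token, channel ID, and model name are placeholders, and a real version would need to chunk long histories to fit the model's context window.

```python
# Minimal sketch: aggregate a Slack channel's history and ask an LLM for an FAQ.
from slack_sdk import WebClient
from openai import OpenAI

slack = WebClient(token="xoxb-your-bot-token")  # hypothetical bot token
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_channel_text(channel_id: str) -> str:
    """Pull every message from the channel and join them into one string."""
    messages, cursor = [], None
    while True:
        resp = slack.conversations_history(channel=channel_id, cursor=cursor, limit=200)
        messages.extend(m.get("text", "") for m in resp["messages"])
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            break
    # conversations_history returns newest first, so restore chronological order
    return "\n".join(reversed(messages))

history = fetch_channel_text("C0FTRBANANA")  # hypothetical channel ID
completion = llm.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Based on all the questions and answers in this channel, "
                   "can you write an FAQ about the product?\n\n" + history,
    }],
)
print(completion.choices[0].message.content)
```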
But you could take it in a whole different direction:
I want you to act as a CPO who is coaching me to become an elite Product Manager leading the banana-peeler feature in the #ftr-banana-peeler channel. You don't hesitate to give straight-to-the-point, hard (yet constructive) feedback and to highlight inefficiencies, because you believe this is what will make your team member improve. I am well aware of this way of giving feedback and welcome it, as I am here to grow as fast as possible.
Wait, what? 😱
What if we went straight into a Black Mirror episode?
Based on the messages in the Slack channel, can you assess [my] morale since January? In particular, can you leverage subtle patterns, for example how fast he replies, how fast he gets answers, how many times he is being pinged, and how much he needs to context switch?
Not the best prompt, but the answer was still creepily accurate.
And you could go full Black Mirror and start imagining an automation that periodically estimates the mood of team members and alerts the lead when there are signs of burnout. The premise of a Black Mirror episode is usually some technology that doesn't exist yet, but what is already in our hands today is scary enough, even if we haven't imagined the use cases yet.
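To make that concrete, here is a minimal sketch of what such an automation could look like, reusing the slack and llm clients and the fetch_channel_text helper from the sketch above. The channel IDs, alert target, threshold, and JSON schema are all hypothetical; the JSON response format is just one way to get a machine-readable answer out of the model.

```python
# Hypothetical burnout-watch job: run it on a schedule (cron, GitHub Actions, ...).
# Reuses `slack`, `llm` and fetch_channel_text() from the previous sketch.
import json

WATCHED_CHANNELS = {"alice": "C0FTRBANANA", "bob": "C0FTRPEELER"}  # placeholders
LEAD_DM_CHANNEL = "D0LEADDM"  # placeholder: where to alert the lead

def morale_report(member: str, history: str) -> dict:
    """Ask the LLM for a structured morale estimate, e.g. {"score": 3, "signals": [...]}."""
    resp = llm.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Assess {member}'s morale from these Slack messages. Leverage subtle "
                "patterns: reply latency, how often they are pinged, context switching. "
                'Answer as JSON: {"score": <1-10>, "signals": [<strings>]}.\n\n' + history
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

for member, channel_id in WATCHED_CHANNELS.items():
    report = morale_report(member, fetch_channel_text(channel_id))
    if report["score"] <= 3:  # arbitrary threshold for "signs of burnout"
        slack.chat_postMessage(
            channel=LEAD_DM_CHANNEL,
            text=f":warning: {member}'s morale looks low ({report['score']}/10): {report['signals']}",
        )
```

Whether you should build this is another question entirely; the point is that none of it is science fiction.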
Last non-creepy prompt, "one for the road":
"Based on all the messages from the Slack channel what underlying processes can you infer at [my company] ? An underlying process is for example an unspoken rule that some people or teams need to validate some specific decisions, but you can see it in the interactions"