2 Comments
Eric Blaettler

Thought-provoking article—thanks for unpacking the practical and philosophical dimensions of AI explainability. I recently explored related territory in my Medium piece “Reading Between the Lines: What Your Brain Reveals About the Future of AI,” where I argue that genuine explainability demands more than just statistical transparency; it requires AI to tap into the human processes of meaning-making. When we ground explanations in the mechanisms by which our brains create understanding—semiosis, interpretation, and cultural context—AI transitions from being merely ‘transparent’ to truly relatable and trustworthy. The semiotic web vision pushes this further, embedding meaning and empathy into AI’s very architecture. Looking forward to seeing how the conversation around meaning-centric AI evolves from here!

kalim

I thoroughly enjoyed reading this interview. Hoping for more such insights every month.
