The AI Impostor
Photographer: Cyrille Duverne - https://stocksnap.io/photo/70L5UYL0FO


A new phenomenon seems to have emerged in recent months, if not over the past two or three years: some people, in both industry and academia, present themselves as artificial intelligence (AI) experts without having the required competencies. In light of the recent success of artificial neural networks and some other machine-learning algorithms (which we self-indulgently call AI), this is an opportunistic tactic to jump on a train that most of us think is not only carrying the solution to all of our problems (this may, of course, be a bit exaggerated) but can also lead to a good job, money, and fame.

Let's be clear about this: we are not talking about young engineering and computer science graduates who are starting their careers, are naturally fascinated by new technologies, and hence emphasize their knowledge of the field to get a chance to start. There is no shame in pointing to your undergraduate courses and small projects to demonstrate that you have the basis to grow into an AI expert. This is perfectly all right, as long as you stay committed to facts and present your knowledge and abilities instead of aggrandizing them.

The AI impostors, mainly academics but also some industry employees, are people who change fields overnight and suddenly claim to be AI experts without the depth and breadth of the necessary knowledge and experience. AI impostors are, in essence, scientific chameleons.

As for professors, we tend to run after any whistle that promises grants, a trait that is, unfortunately, reinforced by the publish-or-perish credo. As an engineer, I would frantically search my papers from twenty years ago for a sentence containing the words 'personality' and 'self' to prove I know about psychology, if there were an opportunity to get money for psychology research. It is silly, childish, and certainly unbecoming of scientific dignity. The maxim of objectivity and unwavering dedication to facts in science leaves no place for masquerading. But as we academics happen to belong to the species Homo sapiens, we do indeed exhibit all the traits of its other members; having a Ph.D. does not seem to vaccinate us against utter imprudence and obvious greed.

Not that this makes much sense, but we could, just for the sake of entertainment, devise a Turing Test to recognize AI impostors. Such a Turing Impostor Test should distinguish a real AI expert from a hoaxer. As my working environment is a postsecondary institution, I may be able to contribute to the academic version of such a test by proposing some questions for the Turing judge (who has to be a real AI expert, by the way; say, a colleague like Geoff Hinton). The Turing Impostor Test does not require separate rooms and has to be conducted face-to-face (which, understandably, would freak out all impostors).

So here we go with some questions to debunk AI impostors: 

When did you get your Ph.D.? If this happened less than ten years ago, you can hardly call yourself an expert in anything, unless you can back it up with 10,000 citations (minus self-references) for the fantastic algorithm you published two years ago.

What was the topic of your Ph.D. thesis? If your field of research, reflected in the title of the thesis and its content, is not AI, you can hardly call yourself an AI expert. Rudimentary alignments of some of the pages of your thesis with some AI methods do not count. 

How many publications do you have in the AI field? Here the crafty nature of professors can potentially fool the judge, but not if the latter is of Hintonian caliber. Using some notions of probability theory, a little statistics here, and some toothless pattern recognition there may not even qualify as AI knowledge, let alone expert-level expertise.

How old are your AI publications? Related to the question of Ph.D. age, this question aims at the only thing that matters in science. If you have publications in AI (of a theoretical, algorithmic, or applied nature), then you may as well claim competency (well, your colleagues would recognize you anyway if that were the case). Of course, seniority and track record do count. Associating knowledge and wisdom with the white-bearded professor may appear superficial, but, resting on a body of decently cited literature, it has some validity.

The AI impostor, naturally, would never expose himself to such a test. He generally operates in the small and cozy environment of his institution, where he may manage to impress some students and, through his relationships, the administration of his university. AI impostors use cunning marketing techniques, choose fancy titles for their papers and (local) talks, and embed colorful but unintelligible graphics in their publications and presentations. They are mainly after resources, and they need to deceive only a small number of people at their institutions to achieve their goal.

The AI impostor is not just a silly figure with a simplistic, parochial, and naive worldview. Beyond the ridiculousness of their actions, AI impostors may seriously damage their home institutions. A university, faculty, or department that puts forward a swindler to represent it as an AI expert jeopardizes its reputation, a strategic risk that should not be taken lightly.


H.R. Tizhoosh

Kimia Lab, Mayo Clinic

4y

Maybe the AI imposters do not know they are imposters; maybe they just suffer from Dunning-Kruger effect: https://www.youtube.com/watch?v=y50i1bI2uN4

Martin Orji

MNSE, MIEEE - Director Engineering Services at Patmo Engineering & General Services Ltd.

4y

I completely agree with you. It is common practice for certain academics to deem themselves experts in a field, often because they are the only ones in a department with more extensive knowledge of that field, or sometimes because they are the highest-ranking person. I have seen this happen in the field of Big Data as well. It is my opinion that theoretical knowledge of a field is not sufficient to claim to be an expert, especially in AI and Big Data. Practical experience on real-life projects that apply such techniques should be a criterion for a suggested Turing test. My emphasis here is on collaboration between institutions and industry. After all, of what use is science if it is not applied?

David Prokop

FDA Funded Ai Researcher • Principal Investigator • Award Winning Ai Product Developer • 35 Patents, Microsoft Researcher • Founder TruMedicines • Ai Hardware Lab at University of Washington

4y

Your treatise states that only Ph.D.s can be AI experts. I disagree. Dismissing those of us with 20 years of industry R&D experience is narcissistic.

P. Alison Paprica

Part-time researcher, advisor, author, and trainer; full-time stubborn optimist

4y

Hamid, I agree that "AI Imposters" as you describe them are an issue, but don't know that I'd have such tough exclusion criteria.  Could be because my chemistry training makes it impossible for me to imagine 10,000 citations, but it's also that I want to make it clear that there is room for many when it comes to bringing AI into healthcare.  Of course, they won't all be experts.  Vector-recognized Master's programs (link below) distinguish between 'core technical AI' programs which are generally STEM and have the aim of producing graduates who can develop and deploy models, and 'complementary AI programs' where people learn enough about AI methods and applications to be really effective on a team that also has technical team members. Of course, people shouldn't over-promise or make false claims, but we do want many diverse contributors, even if they aren't all technical AI experts. https://vectorinstitute.ai/wp-content/uploads/2018/10/guidance-for-ai-related-masters-programs.1.pdf

Emmanuel O.

Machine Learning Engineer

5y

This is a really interesting article. I would love to know what motivated you to write this. Thanks


