… this is the headline of a post shared by the Facebook page “Philosophy matters”. The post has likes and comments from people who sympathize with philosophy or have an education in philosophy. I read the linked article and posted a comment expressing my concerns about the mismatch between the headline and the content of the article. The latter is mostly about the use of technology in education, while the former focuses only on philosophy graduates. Later I realized that the word “expert” in the headline is crucial for both. Philosophers would learn to play the role of an expert in a corporate environment, delivering assessments on demand. That’s what the headline suggests. At the same time, I would add, philosophers would be aware, and make others aware, that expertise is not the whole story on the journey called education.
The article summarizes the “Education Innovation Conference” 2018 in Bangalore, India. During a discussion, the host, Mr Pratik Dattani, Managing Director at the consulting firm “Economic Policy Group”, mentions that companies need philosophy graduates in “Chief Ethics Officer” roles…
to look into [Artificial Intelligence]-related outputs through a human lens
This statement came out of a discussion with members from tech firms. Since this was a conference on education and its use of technology, the underlying implication seems to be that the use of Artificial Intelligence and other technology in education requires close observation.
Yet, from the article it seems the conference participants had no doubt that the education sector has to catch up with the latest technology: while the financial and health sectors have understood the need to use technology to save costs and create new business models (broadly coined as “Digitalization”), educational institutions have not caught up sufficiently in employing those technologies for education (and education administration). In particular:
“AI has got the approval of and is embraced by many major players in the financial services and healthcare sectors. It would play a pivotal role in course designs and formal curriculum as well.”

“The audience which consisted of educators, technologists, government officials from education and employment sectors and representatives of NGOs said that schools and colleges should be receptive in embracing new technologies.”
The whole audience said that schools and colleges should embrace new technologies!? Well, just as a side note: even in the finance sector there is, or was, a debate about the degree to which artificial intelligence should replace the role of bankers in advising customers on how to invest their money. In 2016, Gina Miller (who is now publicly known for her attempts to prevent Brexit, and is warning of the consequences if both sides don’t get reasonable and avoid a no-deal situation with the EU) argued against “FinTech” companies that aim to replace advisors. Instead, she suggested keeping human advisors and using the technology for “grunt work”, which helps to lower the price while keeping the quality of advice. In this case, technology is deployed around, not instead of, the human interaction:
“There is a regulatory issue here: fiduciary duty. Who is liable if a product has been deemed to be mis-sold? Is it the person who has designed the back-office portfolio? Is it the company who has designed the algorithms? Or is it the human person who is using that element of advice?”
“[We should] accept that technology will not give us all the [..] answers and will not repair the damage that we have done to our industry over the past in terms of not delivering better outcomes for clients.”
Similar questions about liability, trust, and whether a recommendation is justifiable apply when platforms suggest career paths from “data insights”, where the data is a student’s digital trace during a course.
“Today’s technologies make it possible to automate grading and create individualised learning paths with AI tutors. AI will make ‘Trial & Error’ less intimidating for the student, making the learning much better,” Mr Iyer – CTO of Indian School of Business – said.
There is surely potential in more individualized and motivating learning paths, and technology can help here. But as with financial advisors, there are tasks where doubt is justified when adopting new technology. Current recommendation systems, for example on YouTube, have directed users to extremist content. Such biases cast doubt on the degree of trust we should place in recommendation systems, and on whether the tech industry can be relied on to self-regulate.
One participant of the Education Innovation Conference, Mr Shivaam Sharma, Founder of Trans Neuron Technologies (which offers “iTrack”, a collaboration platform that brings students and industry closer together), has a multi-phase vision of “meaningful” education… from play to employment:
“Unlike boring theory and practical classes, materials should be made available to students to play around with at the beginning phase. The second phase should be to find out the right interest in each individual after which innovation should be encouraged. When they reach Grade 11, students should be linked to companies working in their areas of interest to build the right career. This will make education completely meaningful,”
What annoyed me here was the self-confidence with which education is declared to become “completely meaningful”. Admittedly, one goal of education is fitting into a role in society – which in many cases means within a company – and making a contribution there. But assuming that this is the only purpose of education would make education synonymous with upskilling.

Creating Chief Ethics Officer or similar roles in a company, whose job it is to assess the output of technology, is a job option for philosophers. OK. Advances in technology more and more cross the border of what societies used to accept and shake many notions. A philosopher is perceived, and demanded, to satisfy the need of users to have a peaceful mind when using technology. Users want to be able to say: “Philosophers are spot-checking the recommendations based on my values, so I can trust this technology.” There is a parallel with certificates for organic / fair trade food. Customers buy the product because of these certificates. They trust that those who grow the crop comply with practices that respect their values. Philosophers can give this trust to users on the level of ethics, while food certificates give it on the level of farming. That’s why the headline of the article contains the word “expert”. It’s not that society demands aporetic thoughts from philosophers; rather, at this conference, tech companies need experts in ethics to raise questions that might otherwise have been missed, and perhaps even to train AI to steer toward behavior that complies with a set of values.
One way to approach this challenge is to leave some doubts behind, embrace it, and say: “OK. At least I know what you expect.” A philosopher might know that the role reduces her to a provider of expertise, even though she knows, based on her work ethos as a philosopher, that she is not fully able to give peace of mind to the users. For example, ethical questions have a lot to do with individual choice, which cannot be decided by experts alone. But she can use this role to make her own choices and to ensure that the existence of an ethics department does not prevent others from asking questions that do not belong to their expertise but to the broader question of what it means to exist as an individual who has been assigned a particular role. A philosopher might even be able to provoke this kind of interaction with other departments, in particular the department where the research and delivery happen.

To take one example of people bringing their views as individuals into the narrow scope of an expert role: there is news coverage about a Google engineer who worked on search optimizations. After he became aware that his company was collaborating with China on a censored search engine, he resigned. One can ask whether resigning is a solution and whether it would not be better to organize resistance from inside. In any case, this engineer did not resign because of his expertise in search optimizations, but because of the way that expertise was utilized by the company, and because of his conviction that helping governments with censorship is not the right thing to do.
Another way to approach the challenge of this demand: there are experts for everything, but should there be experts for ethics? Externalizing ethical aspects to a dedicated department might decrease an institution’s immunity against top-down decisions. It lowers the demand on individuals (at all levels, from manager to staff member) to be responsible for the work they create and the decisions they take. The safety net should be as broad as possible, and this is why civil society (citizenship within a company, a state, and – if we can scale it that far – within the world) is something we might want even more than philosophy officers who act as a philosophical police towards other workers.
Within our roles as tame users and job-immersed employees, there are opportunities to enrich these roles with aspects that have something to do with who we are as individuals and that allow us to influence the course of the company, the community, the country, the world. This will still be the case when “professional philosophers” are in high demand to audit the output and usage of machine learning algorithms. In fact, it applies to the philosopher-experts as well. Education includes gaining experience in navigating between the narrow scope of an expert role and the desires, traits, and thoughts of one’s personality, and it includes training in how to collaborate with other people who are recognized as experts, but also seen as individuals.