Automated Decision Making

The volume of available data is becoming overwhelming for human analysts, who can no longer process it all for useful decision-making. Analysts are bound by the confines of limited time and attention, creating a need to economize data collection, curation, and dissemination (Diakopoulos, 2016). For this reason, many organizations are moving toward automated decision-making through programmed algorithms (Diakopoulos, 2016). Automated decision-making now takes place throughout every industry, often without most people's awareness, such as news articles on Facebook generated automatically from available data (Diakopoulos, 2016). Unfortunately, the automated stories, or decisions, being generated still lack a necessary element of human distinction. There is evidence that automated decisions are not free from mistakes, including costly errors, discrimination, unfair denials of public services or considerations, and even censorship (Diakopoulos, 2016). Despite the efficiency, competitive advantage, and financial benefits of automated decision-making, there remains a strong need for human engagement in the process.
There are at least two areas of decision-making in which human engagement remains necessary. The first is prioritization, the coping mechanism used to handle the volume of incoming information. Prioritization is, by definition, about discrimination (Diakopoulos, 2016), yet a decision-making algorithm cannot account for all the human considerations necessary.
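The limitation can be illustrated with a minimal sketch of algorithmic prioritization. The scoring rule and the example data below are hypothetical, not drawn from any cited system: items are ranked by a single numeric score, and every consideration the score does not encode is simply invisible to the algorithm.

```python
def prioritize(items, score, top_n=3):
    """Return the top_n items ranked by descending score."""
    return sorted(items, key=score, reverse=True)[:top_n]

# Hypothetical news items with two attributes a human editor might weigh.
stories = [
    {"title": "Local budget hearing", "clicks": 120, "civic_value": 9},
    {"title": "Celebrity gossip",     "clicks": 900, "civic_value": 1},
    {"title": "School board vote",    "clicks": 80,  "civic_value": 8},
    {"title": "Viral pet video",      "clicks": 700, "civic_value": 2},
]

# Ranking purely by clicks buries the civically important stories --
# a human consideration the scoring function never sees.
top = prioritize(stories, score=lambda s: s["clicks"])
print([s["title"] for s in top])
# ['Celebrity gossip', 'Viral pet video', 'Local budget hearing']
```

The algorithm is not malfunctioning here; it is faithfully discriminating along the one dimension it was given, which is exactly why a human must decide whether that dimension is the right one.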
The second area of decision-making in need of human engagement is classification: distinguishing classes based on key characteristics (Diakopoulos, 2016). The opportunities for bias and mistakes are plentiful in this area of automated decision-making (Diakopoulos, 2016), because the audience is often not properly considered and the type of knowledge being used is not properly scrutinized (Sen & Hecht, 2015). Recognizing close relational ties requires human association, something automated decision-making still lacks (Herlocker, Konstan, & Riedl, 2000).
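A minimal sketch can show how such classification errors arise. The keyword rule and the flagged-word list below are hypothetical, invented only for illustration: a classifier that assigns a class from a surface characteristic cannot distinguish meanings that any human reader would separate immediately.

```python
def classify(text, flagged_words):
    """Label text 'restricted' if it contains any flagged keyword, else 'allowed'."""
    words = set(text.lower().split())
    return "restricted" if words & flagged_words else "allowed"

flagged = {"attack"}

# The same keyword appears in very different contexts, but the classifier
# sees only the surface characteristic, not the meaning.
print(classify("Plans for the attack were seized", flagged))
print(classify("A scathing attack on the new policy", flagged))
# Both print 'restricted', though only one describes violence.
```

Both texts receive the same label because the rule encodes a characteristic, not an understanding, which is the gap human engagement is needed to close.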
References
Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. http://doi.org/10.1145/2844110
Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 241–250.
Sen, S., & Hecht, B. (2015). Turkers, scholars, “Arafat” and “Peace”: Cultural communities and algorithmic gold standards. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 826–838.