Technology Integration: Making the World a Better Place

The industrial revolution began in the 19th century, and in the late 20th century modern globalization changed the world we used to know. So, what is the next revolution?

Indeed, we are in the middle of a technology revolution, but take a moment and look at the world around us. Despite the unimaginable pace of development, famine, oppression, and war are spreading through many countries like epidemics, and the gap between developed and developing countries keeps widening. We have a responsibility to integrate technology to make the world better, and that can only be achieved by identifying the main problem we face in this era: corruption.

The world is heading towards full digitalization, which means big data[i] is growing continuously. According to recent reports, we produce more than 2.5 quintillion bytes of data every day. The potential locked in this data is enormous, and sooner or later it will be the key to surviving and succeeding, but only if it is handled and interpreted properly.

This is where data analytics and artificial intelligence come in, using sophisticated algorithms to process subsets of this data. Analysts can identify complex patterns and predict behavior, from recommending videos on YouTube to connecting the dots needed to pick up on every red flag that hints at corruption.
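To make the idea concrete, here is a minimal sketch of one common approach, anomaly detection over payment records. Everything in it is invented for illustration: the feature names, the numbers, and the choice of scikit-learn's Isolation Forest as the model.

```python
# Minimal anomaly-detection sketch: flag unusual public-procurement
# payments with an Isolation Forest (all features and data invented).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for real payment records:
# columns = [amount, days_to_approval, bids_received]
normal = rng.normal(loc=[10_000, 30, 5], scale=[2_000, 5, 1], size=(1_000, 3))
suspicious = rng.normal(loc=[95_000, 2, 1], scale=[5_000, 1, 0.5], size=(10, 3))
payments = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(payments)  # -1 marks an outlier

print(f"{(flags == -1).sum()} payments flagged for human review")
```

The point is not the specific model but the workflow: the algorithm narrows millions of records down to a short list that human investigators can actually examine.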

If anti-corruption analytics with an integrated AI system were fused with a country's integrity system, most acts of corruption could be detected, and even predicted at an early stage, creating a more just community. In fact, many platforms have already been created to fight corruption, and their results have been very encouraging, proving that data is a game changer in this fight. In 2017, for example, the Spanish researchers Félix López-Iturriaga and Iván Pastor Sanz used neural networks[ii] to build a predictive model of corruption in Spanish provinces. The algorithm identified previously unseen relationships between particular economic factors, such as rising real estate prices, and corruption cases. Such a model can flag corruption risk before it materializes, potentially allowing authorities to take pre-emptive measures.
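The published model's exact features and architecture are not reproduced here; the sketch below trains a generic feed-forward neural network on invented province-level indicators, just to show the shape of such an early-warning classifier.

```python
# Hedged sketch of a neural-network "early warning" classifier in the
# spirit of the Spanish study; features, labels, and data are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical province-level indicators:
# [real-estate price growth, public-debt ratio, construction share]
X = rng.normal(size=(500, 3))
# Toy label: corruption cases more likely where price growth is high.
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=1)
net.fit(X_train, y_train)

print(f"held-out accuracy: {net.score(X_test, y_test):.2f}")
```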

Another example comes from the U.S. Department of Agriculture (USDA), which observed a dramatic drop in fraudulent crop insurance claims after it adopted data analytics. The agency drew on 170 data sources – including several terabytes of policy information, 120 terabytes of weather, satellite, and other remotely sensed data, and 1.3 million crop insurance policies across 3,200 counties – to look for atypical patterns among insurance claims, cross-checking them against high-resolution satellite images and weather records. The effort saved billions of dollars, and claims kept falling simply because participants in the program quickly became aware of the USDA's new ability to detect fraud and suspicious activity.
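At its core, that cross-checking is a join between claims and an independent data source. The toy sketch below, with invented column names and numbers rather than the USDA's actual pipeline, flags drought claims filed from counties whose recorded rainfall was normal.

```python
# Sketch of the cross-checking idea: join loss claims against rainfall
# records and flag mismatches (all columns and values are hypothetical).
import pandas as pd

claims = pd.DataFrame({
    "policy_id": [1, 2, 3],
    "county": ["A", "B", "C"],
    "claimed_cause": ["drought", "drought", "hail"],
})
weather = pd.DataFrame({
    "county": ["A", "B", "C"],
    "season_rainfall_mm": [120, 480, 300],  # county B had near-normal rain
})

merged = claims.merge(weather, on="county")
# A drought claim from a county with normal rainfall is suspicious.
suspect = merged[(merged["claimed_cause"] == "drought") &
                 (merged["season_rainfall_mm"] > 350)]
print(suspect[["policy_id", "county"]])
```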

To ensure that action is taken on the evidence the system collects, transparent criteria must be applied, leaving no chance to hide evidence and keep corrupt personnel safe. The public would also always be aware of the system's outcomes, which builds trust and makes people more accepting of granting access to their data.

The reasons we do not yet see such a system integrated into government are mainly the ethical guidelines for data protection and the technical challenges. First, to be efficient at catching and predicting acts of corruption, an anti-corruption AI system must draw on data from various sources and link them together even when the connection is not apparent, much as human intelligence does (a toy version of this linking step is sketched below). The main obstacle, however, is finding highly skilled experts to develop such sophisticated algorithms. Processing such a huge amount of data is also demanding, but sufficiently powerful computers already exist. In 2017, for instance, IBM's AI system Watson monitored payments and identified irregularities, and it now serves as a model for Berlin's project of applying AI to fight corruption; if the project succeeds, it will go national.
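As a rough illustration of cross-source linking, this sketch matches contract awards against a company registry to surface officials tied to the winning firms. All names, fields, and records are invented; a real system would link far messier data at far larger scale.

```python
# Toy cross-source linking: join contract awards to a company registry
# and flag officials who own the firms they awarded (data invented).
import pandas as pd

awards = pd.DataFrame({
    "contract_id": [101, 102],
    "winning_firm": ["Acme Ltd", "Bolt SA"],
    "awarding_official": ["J. Doe", "K. Roe"],
})
registry = pd.DataFrame({
    "firm": ["Acme Ltd", "Bolt SA"],
    "owner": ["J. Doe", "M. Poe"],
})

linked = awards.merge(registry, left_on="winning_firm", right_on="firm")
conflicts = linked[linked["awarding_official"] == linked["owner"]]
print(conflicts[["contract_id", "winning_firm"]])
```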

Second, strict data privacy regulations hinder the implementation of such a large-scale system, since it is based on collecting all sorts of data about each person or organization. Many experts point to the enormous benefits of integrating AI, yet a large portion of society finds it an intrusion on their privacy. But let's be honest here: we already live in an illusion of personal data privacy. One of the biggest scandals of 2018 was the harvesting of Facebook users' data to influence the US election, and yet the company's revenue was barely affected.

Another debated issue is the fear of AI overpowering humans and going out of control. Here, however, the system's function is to collect and publish evidence, not to impose actions or punishments on its own. After all, it is a machine following rules. As Alan Turing[iii] said, "A computer would deserve to be called intelligent if it could deceive a human into believing that it was human."

We have to admit that in the near future artificial intelligence will change the way people work, communicate, treat diseases, and conduct wars. Conventional methods are no longer effective, so integrating data analytics and artificial intelligence has become essential to dispel the fog in which the public sector operates. After all, it is our obligation to make the world truly better.


[i] Datasets that are massive, produced continuously, and drawn from many sources and formats. [8]

[ii] Computing systems inspired by the biological neural networks of the human brain; they learn general patterns from data rather than being programmed with task-specific rules. [9]

[iii] An English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. [10]

Written by Merna Ehab Shehata
