How to Leverage AI for Social Good, CIO News, ET CIO
As issues such as poverty, inequality and climate change pose threats around the world, experts believe that advanced data and technologies such as AI and ML can bring about change and help solve these development problems.
Given how AI is often viewed by the public, either as a supercharged next-gen technology or as a bane, it is important to discuss the reality of AI and data in the social sector, and the challenges currently being addressed with the help of AI.
“We’re trying to look at areas where AI can accentuate what’s already going on. So, for example, if we are using GIS (Geographic Information Systems) mapping of, say, pond areas or all irrigation outlets across a state, then how do you augment those datasets with ground-level data, meaning someone has a device and actually tells you: the pond is there, the depth is such, this is the water level before the monsoon and after the monsoon, and I’m marking it correctly,” Gaurav Sharma, Senior Advisor – AI, GIZ India, said during a speech at an event on using data and AI for social good.
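The augmentation Sharma describes could be sketched as a simple join between a GIS-mapped inventory and field reports collected on the ground. This is a hypothetical illustration, not GIZ India's actual system; the pond identifiers and field names (`depth_m`, pre/post-monsoon levels) are assumptions for the example.

```python
# Illustrative sketch: attach ground-truth field measurements to GIS records.
# All identifiers and field names here are hypothetical.

gis_ponds = {  # from GIS mapping: pond id -> coordinates
    "P-101": {"lat": 17.38, "lon": 78.48},
    "P-102": {"lat": 17.40, "lon": 78.50},
}

field_reports = [  # from a surveyor's device on the ground
    {"pond_id": "P-101", "depth_m": 2.4,
     "level_pre_monsoon_m": 0.8, "level_post_monsoon_m": 2.1},
]

def augment(gis, reports):
    """Merge field measurements into GIS records; flag unverified ponds."""
    merged = {pid: dict(rec, verified=False) for pid, rec in gis.items()}
    for r in reports:
        if r["pond_id"] in merged:
            merged[r["pond_id"]].update(r, verified=True)
    return merged

ponds = augment(gis_ponds, field_reports)
print(ponds["P-101"]["verified"], ponds["P-102"]["verified"])  # True False
```

The point of the `verified` flag is exactly the gap Sharma raises: the GIS layer tells you where a pond should be, while the field report confirms the ground reality.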
He further explained that this type of forward planning is possible with the presence of such precise data, and that this forward-looking intelligence can be used to warn people about the weather, as in a project with ZeaJet. Gaurav also touched on a project they are working on with the Telangana government that revolves around agricultural value chains and crop diversification.
Speaking about the project, he said: “We are working with the Telangana government because they have this concept of an open data portal, and over the last two or three years they have done some pre-assessment with some of their own initiatives. We want to add to them, tackle one of the challenges, and then act on it.”
They are also working on a project to collect data on the Mundari language. “Mundari is spoken across a large tribal belt in Orissa, Jharkhand, and in and around Andhra Pradesh, by some 3-4 million people, but there is very little text. There are no resources in terms of radio, newspapers or even books. So what we’re trying to do is create small textual datasets and then see if they can be used in areas like elementary education, and science and math education, for people who don’t understand the language,” Sharma added.
Speaking about data and AI and how they can be applied, Professor Jasjit Singh of INSEAD said data is a much larger space than AI and that it is worth separating the two. “Every organization needs to be data-driven, right? And what I mean by that is, you ultimately have to measure progress, measure how things are working, or do measurement even when you conduct a needs analysis,” he said. “All of this involves data. So data is obviously being applied on an increasingly large scale. Now, sometimes people take AI/ML approaches in particular; sometimes they use more traditional statistical approaches, as I mentioned.”
He explained how social sector data can be widely used for predictive analytics. To clarify, he gave a hypothetical example of a training program for at-risk youth in a low-income rural area, where the organization needs to know which participants are likely to drop out of the program and how it can retain them. In such a case, he said, the organization could look at past data on who dropped out or who was at risk, and use that as a predictive mechanism without necessarily knowing the exact causal relationships; you are simply making the prediction.
“Typically, if you’re taking a traditional, statistical or hypothesis-testing approach, you want to have a concrete model of X leads to Y leads to Z. So when explainability is very important, this is where AI, in particular deep learning, gets a little tricky,” Singh said of where AI comes in. “If you just see that person X got credit from a financial inclusion organization but person Y didn’t, and you’re concerned that there is discrimination based on gender, race, religion and so on, it’s really hard to establish that in the context of AI. So, at a minimum, you will have to think about it very seriously when designing algorithms.”
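One simple check behind Singh's concern is demographic parity: compare the model's approval rates across a protected attribute even when the model itself is a black box. The sketch below is a generic illustration; the decision records and the four-fifths (0.8) rule of thumb are assumptions for the example, not anything from the panel.

```python
# Illustrative fairness check on a credit model's decisions: compare approval
# rates by group (demographic parity). Data and threshold are hypothetical.

def approval_rate(decisions, group):
    """Fraction of applicants in a group that were approved."""
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

def parity_ratio(decisions, group_a, group_b):
    """Lower approval rate divided by the higher one (1.0 = perfect parity)."""
    ra = approval_rate(decisions, group_a)
    rb = approval_rate(decisions, group_b)
    return min(ra, rb) / max(ra, rb)

decisions = [
    {"group": "men",   "approved": 1}, {"group": "men",   "approved": 1},
    {"group": "men",   "approved": 1}, {"group": "men",   "approved": 0},
    {"group": "women", "approved": 1}, {"group": "women", "approved": 0},
    {"group": "women", "approved": 0}, {"group": "women", "approved": 1},
]

ratio = parity_ratio(decisions, "men", "women")
print(round(ratio, 2))  # 0.67: below the common "four-fifths" rule of thumb
```

A check like this only surfaces a disparity in outcomes; establishing whether the disparity constitutes discrimination is the harder problem Singh points to.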
Akanksha Sharma, Global Head ESG – Social Impact, Sustainability & Policy, spoke about the lack of quality data and of a data framework. “If you’re talking about health and education, data quality is one thing, and it’s also about data exclusivity. We process data at different touchpoints where we don’t have a common pool with reliable access to information; building that, I think, is the major solution. Digital education and the digital divide exist, but how to translate all these solutions to the last mile and implement them with reliable information is the major challenge that we have seen.”
She gave the example of a telemedicine project they worked on during the pandemic, where they had to adopt a hybrid model combining specialized telemedicine services with the physical presence of a doctor. What they realized was the value of leveraging a state- or district-level common data pool for services such as healthcare or even education.
On the data collection front, she stressed that these problems will not go away tomorrow or in the next few years, because we are struggling with many things at the same time. Continuing on this point, she said: “When we talk about enabling AI and technology to solve last-mile problems, we realize that one of the main challenges is the partnership between the public sector and the private sector, because the government can play a huge role in creating a general framework and bringing everyone together under one roof.”
Asked about the criteria for evaluating responsible AI-based solutions, Subhashish Bhadra, Director, Omidyar Network India, said the two aspects would be “Should I be doing AI or not?” and, if the decision is to do AI, “How should I do AI?”
Speaking about the second aspect, Bhadra said: “I would think of responsible AI as falling into three buckets. Because, at the end of the day, data is the fuel that powers the engine of AI, there are a lot of ethical questions around the collection and use of data. I think there is a lot of work and knowledge on data protection, but questions remain like ‘How do you collect this data?’, ‘Is the person from whom you collect this data aware of it?’, ‘Do they have the capacity to withdraw their consent?’. That is one pillar of responsible AI that must be examined.”
The second pillar, he said, is how to bridge this information asymmetry, with the question: these are all black-box algorithms, so can something be done to improve their transparency, so that they can be integrated into the wider social construction of good governance?