When Amazon debuted its same-day delivery service in Boston, it seemed a promising alternative to poorly stocked, overpriced, or low-quality supermarkets in neighborhoods like Roxbury. But the service didn’t extend there, even though Amazon delivered to residents on all sides of the neighborhood.
Amazon was assailed for overlooking the comparatively lower-income Boston neighborhood, and the company, in its defense, said customer data and delivery logistics played into the decision. Roxbury wasn’t the only neighborhood Amazon overlooked, either. According to a Bloomberg report, in Chicago, New York, Boston, Atlanta, and other cities, black residents were half as likely as white residents to live in same-day delivery areas, despite paying the same $99 membership fee.
Biased Data – Unfair Delivery
Amazon was criticized in 2016 for offering different services to lower-income neighborhoods despite charging the same membership rate.
Today, data collected on individuals, combined with artificially intelligent systems, is used to roll out new services, populate newsfeeds, target advertisements, determine healthcare treatment plans, and even inform court rulings. Yet most of us don’t really know how these tools work or how they arrive at their decisions, nor do we know how to interpret or validate many of the algorithms that power these systems. Without that knowledge, vast swaths of our economy could become black boxes that cannot be scrutinized for legality or fairness.
Data is needed as much as regulation on data
It was an idea on the minds of industry experts, researchers, psychologists, lawyers, and activists last month at AI Now 2017, a workshop hosted by the AI Now Initiative. Speakers at the conference not only highlighted the challenges that AI presents but also discussed the ways in which effective governance could alleviate those concerns.
Vanita Gupta, Nicole Wong, Terrell McSweeny, and Julie Brill discussed issues of governance at AI Now 2017 in Cambridge, Massachusetts.
“There is no possible way to have some omnibus AI law,” says Ryan Calo, a professor of law and co-director of the Tech Policy Lab at the University of Washington. “But rather we want to look at the ways in which human experience is being reshaped and start to ask what law and policy assumptions are broken.”
Say over our data
Collecting enormous swaths of data can speed the development of AI, but it can also compromise people’s privacy. People are not always aware of how their data is collected and used, or who owns it, and given the pace at which AI is progressing, companies themselves don’t necessarily know how they’ll use the data in the future. That can lead companies and governments to request overly broad consent from consumers and citizens. “How am I supposed to give you notice to get your consent when I don’t know what I want your data for?” Nicole Wong, a deputy White House chief technology officer (CTO) under President Obama, said at AI Now. Yet without policy to regulate AI, companies are free to use data as they see fit. They could sell that data, nudge consumers toward products or services they may not need, or exclude certain segments of the population from services that would not benefit the company. In 2014, Facebook conducted experiments that manipulated users’ moods by altering the tone of their newsfeeds, raising alarms across the internet. And certain car insurance companies, ProPublica has reported, have charged an average of 30% more for premiums in ZIP codes with higher concentrations of minorities than in whiter neighborhoods with similar accident costs. […]
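To make that kind of disparity concrete, here is a minimal, hypothetical sketch in Python of the sort of disparate-impact check a journalist or regulator might run over service-coverage records. All of the data below is invented for illustration; it does not reproduce Bloomberg’s or ProPublica’s actual analyses, and the 80% threshold is borrowed from the EEOC’s rough “four-fifths rule” heuristic rather than from either report.

```python
# Hypothetical disparate-impact check. All records are invented for
# illustration; this is not Bloomberg's or ProPublica's methodology.

records = [
    # (neighborhood_majority, has_same_day_delivery)
    ("white", True), ("white", True), ("white", True), ("white", False),
    ("black", True), ("black", False), ("black", False), ("black", False),
]

def coverage_rate(records, group):
    """Fraction of records in `group` that received the service."""
    served = [s for g, s in records if g == group]
    return sum(served) / len(served)

white_rate = coverage_rate(records, "white")
black_rate = coverage_rate(records, "black")

# The "four-fifths rule" from U.S. employment law is a common, if rough,
# heuristic: flag the outcome if one group's rate falls below 80% of the
# most-favored group's rate.
ratio = black_rate / white_rate
print(f"white coverage: {white_rate:.0%}, black coverage: {black_rate:.0%}")
print(f"ratio: {ratio:.2f} -> {'flag' if ratio < 0.8 else 'ok'}")
```

A real audit would, of course, need far more data and careful controls for confounders, such as the similar accident costs ProPublica matched neighborhoods on.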
copyright by www.pbs.org