Government would like to get consent from citizens for what it does with algorithms, but it is not clear how to do that.

Respondent recognizes the tension between the benefits algorithms can bring and their potential harms, and the need to make sure any algorithmic intervention is proportional.

The system they are working on generates automated notifications for enforcers. It always includes a human in the loop.

"So such a report first ends up with an enforcer who looks at it before really enforced and the system itself is not allowed to make decisions, so our system only gives a pass and the enforcer must then hopefully head it in."

They are blurring people in camera footage. They want feedback from citizens about whether that is good enough. They are considering organizing panels or expert sessions, but no direct feedback loop from citizens to development is currently in place.

They are now planning a blurring service that differs from commercial offerings in that it will be more inclusive, performing well across ages and skin colors.
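One way to make "inclusive" measurable, sketched here with invented data rather than the team's actual evaluation, is to break detection performance down per demographic group, so that a model that works well on average but poorly for one group is caught.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group label, person was detected and blurred or not).
# In practice the group labels would come from an annotated test set.
records = [
    ("age_0_17", True), ("age_0_17", False),
    ("age_18_64", True), ("age_18_64", True),
    ("age_65_plus", True), ("age_65_plus", False),
]

def recall_per_group(records):
    """Fraction of people correctly detected (and thus blurred), per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

print(recall_per_group(records))
# A large gap between groups would signal that the model is not yet "inclusive".
```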

Despite having a strong team, there are of course always limits to what they can achieve; compared to big tech, their resources are limited.

They are also investigating the acquisition of a scan car that can process data on the edge, in which case data would no longer have to be stored centrally, obviating the need for blurring entirely.

Because of all the compliance requirements imposed on government (security, privacy), they can spend less of their budget on pure technical development than commercial companies can.

They will outsource data collection to an external party, but keep data processing internal so that they have more control and can respond better to changes in legislation.

Innovation projects mostly happen at the request of "internal clients", who execute policy set by the city government. Even so, the respondent would like to go back to the alderperson for a go-ahead on an algorithmic system that executes a policy, even if it has been found compliant with privacy and security frameworks.

Direct participation is a challenge. The respondent mentions a recent case in which a work participation council advised negatively on an algorithmic system, but was responding to a question very different from the one actually asked of them.

By analogy, the respondent is worried that if they go to citizens to ask about the blurring algorithm, they will get an answer to the question of whether they want to be scanned in public at all.

People who are inclined to respond tend to hold extreme views: either strongly in favor of AI or strongly against it.

Advice from citizen participation carries a lot of weight, so a lot is at stake; they need to make sure people respond to the actual question.

The blurring algorithm has an intersection over union (IoU) score of 0.8, but internally the number was talked about as an accuracy of 80%: an example of how hard it is to communicate about technical matters. An accuracy figure such as 95% should also be differentiated by how close people are to the camera.
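For reference, a generic illustration (not the team's code): IoU measures how well a predicted box overlaps the ground-truth box for a single detection, which is not the same thing as the percentage of people correctly blurred, so reporting an IoU of 0.8 as "80% accuracy" conflates two different quantities.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection can overlap the true face region with IoU 0.8 and still leave part
# of it unblurred; that is a different statement than "80% of people are blurred".
print(iou((0, 0, 10, 10), (1, 1, 11, 11)))  # ≈ 0.68
```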

In any case, the respondent wants an independent audit of the blurring service because trust in government is so low.

Another challenge on which they want citizens' opinion is the reuse of data that was collected once for a single purpose: the issue of "purpose limitation". Legally this is currently not allowed, but the consequence is that a new camera car would have to be fielded for each purpose.

The fear is that in the wrong hands, being able to add purposes to data collection after the fact makes abuse very easy. 

One constraint is that their system will only be trained on objects, not people.

Two questions follow from this: one, how can the law be interpreted; and two, what do we want as a society? The respondent is more interested in the latter.