models need to be validated to confirm they actually do what they are claimed to do
models need to be paired with human action 
iterative development with constant feedback from ``end-users'' (citizens)
typically, assignments to dev teams come from the management of a work process, who indirectly represent the interests of citizens
    regardless, the dev team should seek to acquire feedback from citizens directly
also, once a system has been taken into production, continuous feedback should be gathered
review by CPA:
    proportionality
    mitigating risks
monitoring: how to ensure a system operates within the defined boundaries: technical, ethical, and end-user value
    requires building in internal control measures
the push for AI can also come from a legal duty (like checking for illegal containers on canal walls)
separation between surveillance and enforcement -- the former is done with AI, the latter remains human action
bias in enforcement is unlikely assuming surveillance has 100% coverage, since every case is flagged rather than a selectively sampled subset
they call it ``responsible collection of data''
    e.g. how to collect data for multiple purposes at once
    where should storage happen -- edge or cloud
