From Google translate:
"Data quality model for Open Data portals
In my experience, many Open Data portals contain datasets with serious errors. These errors are introduced when the data are produced in each department (environment, economy, etc.), and there is no direct control over that process. Yet this is the most important problem to solve: if the data do not meet a minimum quality bar, they are not reusable, no matter how sophisticated the technologies used to serve them (REST API, Linked Data, etc.).
It is a very difficult problem to solve, since there is no way to force the departments to produce quality data. One thing that could be done, however, is to define a data quality model with levels along these lines: