Alejandro Amo

Hello! Thank you for visiting this microsite to learn a bit more about me.

Who am I?

I would describe myself as an IT guy whose career is currently focused on access, exploitation, quality, normalization, privacy and security of data (well, all things data 😉). Although I am not a researcher myself, I have had the opportunity to work for the past 10 years on European R+D+i projects, as well as private research and IT solution development initiatives. Some topics I have covered during my career are smart cars, electricity and water utilities, banking, citizen science, clinical research and cybersecurity (cybersecurity being the topic I know most in depth).

What do I love to work on?

The short answer would be: Data Quality, FAIR principles and Data Governance.

But let's dig a bit into the details.

Data Quality

Data quality refers to the fitness of data for its intended use. It encompasses various aspects such as accuracy, completeness, consistency, timeliness, and relevance.

If you run a business or a research project, you have probably realized that the quality of its data can decay over time for a variety of reasons. Data is often entered manually and people make mistakes, two sources of information that are not exactly the same get mixed together, invalid values appear, traceability becomes impossible... over time, these errors accumulate, leading to inaccurate or inconsistent data. In addition, data can become outdated as business processes change, new technologies are introduced, or regulatory requirements evolve, complicating things even further.

Without proper care of data quality, organizations risk making decisions based on inaccurate or incomplete data, which can have serious consequences. For example, a business may make a poor strategic decision based on faulty data, or a researcher may draw incorrect conclusions from data that has not been properly validated.

The ultimate goal in such cases is clear: developing a strategy and deploying technologies and processes that help your data sources become accurate, consistent, and relevant again. By doing so, you can make more informed decisions and gain a competitive advantage in your field.
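
To make this a bit more tangible, here is a minimal sketch of what an automated quality check could look like in Python. The table, column names and validity rules are invented just for illustration; in a real project they would come from your own data model.

    import pandas as pd

    # Hypothetical customer records; in practice these would come from your source systems.
    df = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@example.com", None, "b@example.com", "not-an-email"],
        "age": [34, -5, 28, 41],
    })

    # Completeness: share of non-missing values per column.
    completeness = df.notna().mean()

    # Uniqueness: the primary key must not repeat.
    duplicate_keys = df["customer_id"].duplicated().sum()

    # Validity: simple domain rules (plausible ages, well-formed emails).
    invalid_ages = (~df["age"].between(0, 120)).sum()
    invalid_emails = (~df["email"].str.contains("@", na=False)).sum()

    print("Completeness per column:", completeness.to_dict())
    print("Duplicate customer_id values:", duplicate_keys)
    print("Implausible ages:", invalid_ages, "| malformed emails:", invalid_emails)

Running checks like these on a schedule, and tracking the results over time, is often the first step towards reversing that decay.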

FAIR principles

In data engineering, four typical requirements for data have become popular as a single concept known as the FAIR data principles (Findable, Accessible, Interoperable and Reusable).

The FAIR data principles are a set of guiding principles for making data, well... findable, accessible, interoperable, and reusable, as the acronym says 😉.

FAIR provides a framework for managing and sharing data in a way that maximizes its potential for reuse and discovery. In order to achieve that, we usually focus on making the data fully machine-readable, providing useful metadata, attaching persistent and unique identifiers, and applying standard vocabularies. In this way, the data can be easily discovered, accessed, and used by humans and machines alike. Projects that attain the FAIR principles can tap into a lot of added value, especially in scientific research, where the (re)utilization of datasets is key.
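
As a small illustration, this is roughly what such a machine-readable dataset description could look like, here expressed with the schema.org vocabulary and serialized from Python. The DOI, URLs and field values are made-up placeholders, not a real dataset.

    import json

    # A minimal machine-readable dataset description using the schema.org vocabulary.
    # The DOI, title and URLs below are invented placeholders, only for illustration.
    metadata = {
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "@id": "https://doi.org/10.1234/example-dataset",  # persistent, unique identifier
        "name": "Example water-consumption dataset",
        "description": "Hourly water consumption readings from a pilot deployment.",
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "keywords": ["water utilities", "smart metering"],
        "distribution": {
            "@type": "DataDownload",
            "contentUrl": "https://example.org/data/consumption.csv",
            "encodingFormat": "text/csv",
        },
    }

    # Serializing to JSON-LD makes the record harvestable by search engines and catalogs.
    print(json.dumps(metadata, indent=2))

Attaching a record like this to every published dataset already goes a long way towards the Findable and Reusable parts of the acronym.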

Data governance

Well, you can't work on the previous two without taking the third one into account. Data governance is the overall management of the availability, usability, integrity, security and privacy of the data used in an organization. This is the part where we focus on the processes, standards and policies that we need to put in place to effectively manage data throughout its lifecycle. The goal of data governance is to ensure that data is managed in a way that meets the needs and goals of the organization while complying with legal and regulatory requirements.
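
As a toy example of what governance looks like when it reaches the code level, here is a sketch of a dataset record that carries an owner, a classification and a retention policy. The field names and values are hypothetical; real programmes encode such policies in catalogs and dedicated tooling rather than ad-hoc scripts.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # A toy governance record for a dataset; fields are invented for illustration.
    @dataclass
    class DatasetPolicy:
        name: str
        owner: str            # who is accountable for the data
        classification: str   # e.g. "public", "internal", "confidential"
        retention_days: int   # how long records may be kept
        created: date

        def retention_expired(self, today: date) -> bool:
            """True when the dataset has outlived its retention period."""
            return today > self.created + timedelta(days=self.retention_days)

    policy = DatasetPolicy(
        name="customer_transactions",
        owner="finance-team",
        classification="confidential",
        retention_days=365 * 5,
        created=date(2018, 1, 1),
    )

    if policy.retention_expired(date.today()):
        print(f"{policy.name}: retention period expired, schedule deletion or archival")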

Contact

Do you want to get in touch? Here are the best ways to reach me:

Email

LinkedIn

GitHub