Commit e1ec1828 authored by taco@waag.org

corrections

parent 82f83d91
The third party lab stores the bio sample in a bio vault (think refrigerator).
5\. The researcher sends the outcome of the research as a report to the donor, either by regular mail, email or other digital channels.
Now, generally speaking, the donor is focused on the end result and will be happy with the process if they finally receive a report from the researcher. But many questions can be asked about each step described above. What exactly happens with sensitive data during these steps? Is the agreement itself personal data? (1) How does the researcher store contact information? Is the bio sample tagged with the name of the donor? How is the phenotype information stored? (2) Does the bio sample get destroyed after processing? Does the third party lab keep a copy of the array data? If so, what are they going to use it for? (3) On what servers is the research cloud hosted? How is the array data transferred to the research cloud? Is the array data removed from the research cloud after the analysis? What metadata is extracted for the catalog? How is this information aggregated? What happens to the copies of the CNV file and the report? How are they transferred? Who will be able to do predictions based on my biomarkers in the future? (4) And finally, what happens with the contact information of the donor when the research is concluded? (5)
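The artifacts passing between the parties in the five-step flow can be sketched as a minimal data model. This is purely illustrative: the class and field names below are assumptions for the sketch, not part of GeneConsent, but they make concrete where questions (2) and (3) above attach.

```python
from dataclasses import dataclass

# Illustrative sketch of the artifacts created in the five-step flow.
# All names are assumptions; GeneConsent itself may model these differently.

@dataclass
class BioSample:
    sample_id: str        # should NOT be the donor's name (question 2)
    donor_pseudonym: str  # pseudonymous link back to the donor

@dataclass
class ArrayData:
    sample_id: str
    kept_by_lab: bool     # does the third party lab keep a copy? (question 3)

@dataclass
class Report:
    donor_pseudonym: str
    findings: str

def run_study(donor_pseudonym: str) -> Report:
    """Walk the artifacts through steps 2-5 of the described flow."""
    sample = BioSample(sample_id="S-001", donor_pseudonym=donor_pseudonym)
    array = ArrayData(sample_id=sample.sample_id, kept_by_lab=False)
    # ... CNV analysis in the research cloud would happen here,
    # after which the array data should be removed (question 4) ...
    return Report(donor_pseudonym=donor_pseudonym, findings="summary for donor")
```

Note that in this sketch the researcher's report is linked to the donor only through a pseudonym; the mapping to real contact details would live elsewhere.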
## Genetic data and ownership
On the other side of the coin, you get half of your DNA from each of your parents.
This paradox illustrates something important: the idea of ownership of genetic data can be contested.
We might have to think about it a bit differently. One of the candidate terms that came up in discussions during the project was data stewardship, placing more emphasis on the need to take care of it as a shared resource. But there was no consensus within the team. Another approach for future research could be to design a mechanism where next of kin are made part of (discussions during) the consent process.
We changed our mental model to view 'data ownership' along the following heuristic: the more data says something about you, the more say you should have about how it is handled, who gets access to it, and under what terms sharing of that data would happen.
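The heuristic can be made concrete as a small lookup: the more identifying a kind of data is, the stronger the consent required before it may be shared. The levels and labels below are illustrative assumptions, not a GeneConsent specification.

```python
# Sketch of the heuristic: more identifying data -> stronger consent needed.
# Category names and levels are illustrative assumptions.

SENSITIVITY = {
    "aggregate_statistics": 0,  # says little about any one individual
    "phenotype_record": 1,      # describes you, but not uniquely
    "genotype_array": 2,        # uniquely identifying, also concerns relatives
}

CONSENT_REQUIRED = {
    0: "none",             # freely shareable once properly anonymised
    1: "broad_consent",    # one-off informed consent
    2: "dynamic_consent",  # renewed consent per use; donor stays in the loop
}

def required_consent(data_kind: str) -> str:
    """Map a kind of data to the consent regime this heuristic suggests."""
    return CONSENT_REQUIRED[SENSITIVITY[data_kind]]
```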
## Datacommons for genetic data
We consider GeneConsent to be one of the steps in the creation of a Datacommons: collections of data maintained and governed by (local) communities. In a Datacommons, technical and organisational means are combined to provide value and control to the members of the community, so that they can influence what happens with their data.
<img src="network.png" alt="drawing" width="400" style="float:right"/>
Right now valuable research data is stored under the control of various academic research institutions, commercial consumer businesses, cloud services and other related parties. But actually using this data for research is legally and practically problematic, or even impossible.
Depending on the conditions of consent, legally speaking much of this data cannot be reused for purposes other than the original one. Commercial companies often address this by reserving the right in their agreements to do whatever they want with the data, including selling it to the highest bidder, which means you will be effectively unable to know where your data ends up. Furthermore, collected data is saved in countless different file formats, at various aggregation levels, levels of sensitivity, research subjects, etc. Even if we could, we definitely don't want to have everything stored in one place; but failing some sort of registry, it's nearly impossible to find relevant existing datasets.
What we aim for is to create a distributed network of datasources where all parties involved can responsibly share, create, find and reuse data, knowing that they will stay appropriately informed every step of the way. We see a consent service as one of the components in such a network, enabling responsible sharing of (research) data according to the FAIR principles. FAIR stands for Findability, Accessibility, Interoperability and Reusability. For more information read the [FAIR principles](https://www.go-fair.org/fair-principles). GeneConsent aims to manage consent between individuals and researchers for genomic data.
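To show how the FAIR principles could map onto a catalog entry in such a network, here is a hypothetical dataset description. The field names and the placeholder identifier are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical catalog entry mapping FAIR principles to concrete fields.
# All field names and values are illustrative assumptions.

@dataclass
class CatalogEntry:
    identifier: str       # Findable: globally unique, persistent identifier
    keywords: list        # Findable: rich, searchable metadata
    access_protocol: str  # Accessible: open, standardised retrieval protocol
    data_format: str      # Interoperable: shared format/vocabulary
    licence: str          # Reusable: clear conditions for reuse
    consent_scope: str    # Reusable: what the donors actually consented to

entry = CatalogEntry(
    identifier="doi:10.0000/example-cnv-study",  # placeholder, not a real DOI
    keywords=["CNV", "genomics"],
    access_protocol="https",
    data_format="VCF",
    licence="restricted, consent-gated",
    consent_scope="CNV research only",
)
```

A registry of such entries would let researchers find relevant datasets without the underlying sensitive data ever leaving its source.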
To paraphrase the checklist, it seems wise to view informed consent as a co-creation.
Adding the word dynamic gives us 'dynamic-informed' consent as described in [Dynamic-informed consent: A potential solution for ethical dilemmas in population sequencing initiatives](https://www.sciencedirect.com/science/article/pii/S2001037019304969). A dynamic process allows participants to be kept informed before, during and after the research is conducted. The linked article gives an overview of possible requirements that can help design such a process, classified in three categories (dynamic permissions, dynamic education and dynamic preferences).
Our research focused on the parts needed to enable 'dynamic permissions' in the consent service. But we always had in the back of our minds that this would also enable responsible reuse of existing data, by keeping the subjects in the loop after they give their consent. If a researcher discovers data collected as part of earlier studies, the mechanism can facilitate a recurring informed consent before the researcher is provided access to the data. The identity of a subject remains unknown to the researcher at least until they give their consent.
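The 'dynamic permissions' flow described above can be sketched as a tiny consent service: a researcher's request for an existing dataset is held until the pseudonymous subject renews consent for that purpose. Class and method names are illustrative assumptions, not the GeneConsent API.

```python
# Minimal sketch of dynamic permissions: access to previously collected
# data is gated on renewed consent. All names are illustrative assumptions.

class ConsentService:
    def __init__(self):
        self._consents = {}  # (subject_pseudonym, purpose) -> bool
        self._pending = []   # requests awaiting a subject's response

    def request_access(self, researcher: str, subject: str, purpose: str) -> bool:
        """Researcher asks to reuse data; subject identity is never revealed."""
        if self._consents.get((subject, purpose)):
            return True
        # Subject is notified out-of-band; they stay anonymous to the researcher.
        self._pending.append((researcher, subject, purpose))
        return False

    def grant(self, subject: str, purpose: str) -> None:
        """Subject renews consent for this specific purpose."""
        self._consents[(subject, purpose)] = True
```

In use, a first request is held pending, and only after the subject calls `grant` does a repeated request succeed, mirroring the recurring informed consent described above.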
<img src="dynamic_informed_consent.png" alt="drawing" width="500" style="display:block;margin-left: auto;margin-right:auto;"/>