The Business Problem
The task in the Telelink case is to identify the complete set of genome traces present in a single food sample, including ALL organisms that should not be found in it. The business needs a solution to this DNA sequence identification problem for improved quality control in supply chain supervision, health care, and consumer protection.
The Research Problem
The important specifics of this problem are that most of the organisms listed below (pathogens, insects, human) are closely related, hence substantial parts of their genomes are nearly or even completely identical. Participants may use this paper as a starting reference for their work. We recommend visiting http://www.metagenomics.wiki/ , where you will find tools that might help you develop your idea.
The sample taken was statistically representative, i.e. the ratios between the numbers of genome sequence reads from different organisms are the same as the ratios between the amounts of the respective meat types used to produce the sausage.
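Because the sample is statistically representative, the share of reads assigned to each organism directly estimates the share of the corresponding ingredient in the sausage. A minimal sketch of that summary step, using purely hypothetical read counts (not results from the case data):

```python
# Hypothetical per-organism read counts -- illustrative placeholders only.
read_counts = {
    "Bos taurus": 52_000,
    "Sus scrofa": 31_000,
    "Ovis aries": 12_000,
    "Glycine max": 4_000,
    "Escherichia coli": 800,
    "Blattella germanica": 200,
}

# Under the representativeness assumption, each organism's read fraction
# approximates its fraction of the sample.
total = sum(read_counts.values())
for organism, count in sorted(read_counts.items(), key=lambda kv: -kv[1]):
    print(f"{organism:22s} {count:7d} reads  {100 * count / total:6.2f}%")
```

In practice the raw fractions may need correction for genome size and coverage, since larger genomes yield more reads per unit of tissue.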
All the provided files are in the FASTA format.
Full genome data for:
Cockroach (Blattella germanica) – GCA_000762945.2_Bger_2.0_genomic.fna.gz
Cow (Bos taurus) – GCF_000003055.6_Bos_taurus_UMD_3.1.1_genomic.fna.gz
Sheep (Ovis aries) – GCF_000298735.2_Oar_v4.0_genomic.fna.gz
Pig (Sus scrofa) – GCF_000003025.6_Sscrofa11.1_genomic.fna.gz
Soybean (Glycine max) – GCF_000004515.4_Glycine_max_v2.0_genomic.fna.gz
Escherichia coli (ASM) – GCF_000005845.2_ASM584v2_genomic.fna.gz
The files are plain text files in the FASTA format, containing DNA sequences. Several software tools can process files of this kind, and it is a good idea to read a bit about what they do as preparation.
The following alignment tools all accept FASTA input:
cushaw – http://cushaw.sourceforge.net/homepage.htm#latest
SSAHA2 – http://www.sanger.ac.uk/science/tools/ssaha2-0
and BLAT – https://genome.ucsc.edu/goldenpath/help/blatSpec.html
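Before turning to the dedicated aligners above, it can help to read the FASTA files directly. A minimal sketch in Python, assuming the standard FASTA layout (header lines start with ">", sequences may span multiple lines); it also handles the gzip-compressed reference files listed above:

```python
import gzip

def open_maybe_gzip(path):
    """Open a plain or gzip-compressed text file (the reference genomes end in .gz)."""
    return gzip.open(path, "rt") if path.endswith(".gz") else open(path)

def parse_fasta(path):
    """Yield (header, sequence) tuples from a FASTA file."""
    header, chunks = None, []
    with open_maybe_gzip(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)
```

For the full reference genomes, a streaming approach like this (or an established library such as Biopython) avoids loading multi-gigabyte sequences into memory at once.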
The iGEM expert for the Datathon
Kristian Nikolov, Case Mentor, is an IT Specialist at the BioInfoTech Lab in Sofia Tech Park, a Computer systems and technologies student at TU-Sofia, and a Digital Marketing expert.
Expected Output and Paper
A requirement of the case is that the assignment of sequence reads to specific genomes be supported with higher than 90% probability. The following deliverables are recommended:
- Algorithms and workflows for mapping the provided sequence reads (the case2.fasta file) of DNA from sausage meat against the reference genomes
- Clearly visible results in the form of the percentage and number of reads mapped against each individual genome
- Source code, environments, and libraries used for the provided solution
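One way to produce the recommended percentage-and-count summary is to tally per-read assignments after filtering by the 90% confidence requirement. The tuples below are hypothetical placeholders (e.g. derived from post-processing an aligner's best hits), not actual output of any of the listed tools:

```python
from collections import Counter

# Hypothetical (read_id, genome, confidence) assignments -- illustrative only.
assignments = [
    ("read_001", "Bos taurus", 0.99),
    ("read_002", "Sus scrofa", 0.97),
    ("read_003", "Bos taurus", 0.72),  # below the 90% requirement, discarded
    ("read_004", "Ovis aries", 0.95),
]

# Keep only assignments that satisfy the >90% probability requirement.
confident = [(rid, genome) for rid, genome, conf in assignments if conf > 0.90]
counts = Counter(genome for _, genome in confident)
total = len(confident)
for genome, n in counts.most_common():
    print(f"{genome}: {n} read(s), {100 * n / total:.1f}% of confidently mapped reads")
```

With a real aligner, the confidence column would come from the tool's own scores (for SAM-producing aligners, the MAPQ field plays this role).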
The main focal point for presenting each team's results from the Datathon is the written article. It will be considered by the jury and will show how well the team has done the job.
Considering the short amount of time and limited resources, in the world of Big Data analysis it is essential to follow CRISP-DM, a methodology tested over time and across many projects. You can read more at http://www.sv-europe.com/crisp-dm-methodology/
The organizing team has tried to do most of the work in phases “1. Business Understanding” and “2. Data Understanding”, while the teams are expected to focus on phases 3, 4, and 5 (“Data Preparation”, “Modeling”, and “Evaluation”), so that the best solutions will stand out in phase 5, Evaluation.
Phase “6. Deployment” mostly stays in the hands of the case-study-providing companies, as we aim to continue the process after the event. So stay tuned and follow the updates on the website of the event.
1. Business Understanding
This initial phase focuses on understanding the project objectives and requirements from a business perspective, then converting this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation standard, can be used.
2. Data Understanding
The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.
3. Data Preparation
The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.
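As an example of a data preparation step for this case, one might filter out reads that are too short or contain too many ambiguous bases before mapping. The thresholds below are illustrative assumptions, not case requirements:

```python
def clean_reads(records, min_length=50, max_ambiguous_frac=0.1):
    """Filter (header, sequence) records before mapping.

    Drops reads shorter than min_length and reads whose fraction of
    ambiguous 'N' bases exceeds max_ambiguous_frac. Both thresholds
    are illustrative defaults, not values prescribed by the case.
    """
    for header, seq in records:
        seq = seq.upper()
        if len(seq) < min_length:
            continue
        if seq.count("N") / len(seq) > max_ambiguous_frac:
            continue
        yield header, seq
```

Because data preparation is iterative, thresholds like these would typically be revisited after inspecting the mapping results.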
4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.
5. Evaluation
At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective. Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
6. Deployment
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model, it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.