Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[2] Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: volume, variety, and velocity.[3] The analysis of big data presents challenges in sampling, which previously allowed for only observations and samples. A fourth concept, veracity, refers to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.[4]

Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[5] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[6] Scientists, business executives, medical practitioners, advertisers, and governments alike regularly meet difficulties with large data sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[7] connectomics, complex physics simulations, biology, and environmental research.[8]

The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial equipment (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks.[9][10] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[11] as of 2012, every day 2.5 exabytes (2.5×2⁶⁰ bytes) of data are generated.[12] An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020.
By 2025, IDC predicts there will be 163 zettabytes of data.[13] According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021.[14][15] A Statista report forecasts the global big data market to grow to $103 billion by 2027.[16] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year.[17] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[17] Users of services enabled by personal-location data could capture $600 billion in consumer surplus.[17] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[18]

Relational database management systems and desktop statistical software packages used to visualize data often have difficulty processing and analyzing big data. The processing and analysis of big data may require "massively parallel software running on tens, hundreds, or even thousands of servers".[19] What qualifies as "big data" varies depending on the capabilities of those analyzing it and their tools. Furthermore, expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."[20]

The term big data has been in use since the 1990s, with some giving credit to John Mashey for popularizing it.[21][22] Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time.[23] Big data philosophy encompasses unstructured, semi-structured, and structured data; however, the main focus is on unstructured data.[24] Big data "size" is a constantly moving target; as of 2012, ranging from a few dozen terabytes to many zettabytes of data.[25] Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of a massive scale.[26]

"Variety", "veracity", and various other "Vs" are added by some organizations to describe big data, a revision challenged by some industry authorities.[27] The Vs of big data are often referred to as the "three Vs", "four Vs", or "five Vs", representing the qualities of big data in volume, variety, velocity, veracity, and value.[3] Variability is often included as an additional quality of big data.
A 2018 definition states, "Big data is where parallel computing tools are needed to handle data", and notes, "This represents a distinct and clearly defined change in the computer science used, via parallel programming theories, and losses of some of the guarantees and capabilities made by Codd's relational model."[28] In a comparative study of big data sets, Kitchin and McArdle found that none of the commonly considered characteristics of big data appear consistently across all of the analyzed cases.[29] For this reason, other studies identified the redefinition of power dynamics in knowledge discovery as the defining trait.[30] Instead of focusing on the intrinsic characteristics of big data, this alternative perspective pushes forward a relational understanding of the object, claiming that what matters is the way in which data is collected, stored, made available, and analyzed.

Big data vs. business intelligence

The growing maturity of the concept more starkly delineates the difference between "big data" and "business intelligence":[31]
[Figure: growth of big data's primary characteristics of volume, velocity, and variety]

Big data can be described by the following characteristics:

Volume: The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not. The size of big data is usually larger than terabytes and petabytes.[35]

Variety: The type and nature of the data. Earlier technologies like RDBMSs were capable of handling structured data efficiently and effectively. However, the change in type and nature from structured to semi-structured or unstructured challenged the existing tools and technologies. Big data technologies evolved with the prime intention of capturing, storing, and processing data that is semi-structured or unstructured (variety), generated at high speed (velocity), and huge in size (volume). Later, these tools and technologies were also explored and used for handling structured data, though they remained preferable for storage; the processing of structured data was kept optional, using either big data technologies or traditional RDBMSs. This helps in analyzing data toward effective use of the hidden insights exposed in data collected via social media, log files, sensors, etc. Big data draws from text, images, audio, and video; it also completes missing pieces through data fusion.

Velocity: The speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real time. Compared to small data, big data is produced more continually. Two kinds of velocity related to big data are the frequency of generation and the frequency of handling, recording, and publishing.[36]

Veracity: The truthfulness or reliability of the data, which refers to the data quality and the data value.[37] Big data must not only be large in size, but also must be reliable in order to achieve value in its analysis. The quality of captured data can vary greatly, affecting accurate analysis.[38]

Value: The worth in information that can be achieved by the processing and analysis of large data sets. Value can also be measured by an assessment of the other qualities of big data.[39] Value may also represent the profitability of information that is retrieved from the analysis of big data.

Variability: The characteristic of the changing formats, structure, or sources of big data. Big data can include structured, unstructured, or combinations of structured and unstructured data. Big data analysis may integrate raw data from multiple sources. The processing of raw data may also involve transformations of unstructured data to structured data.

Other possible characteristics of big data are:[40]

Exhaustive: Whether the entire system (i.e., n = all) is captured or recorded or not. Big data may or may not include all the available data from sources.

Fine-grained and uniquely lexical: Respectively, the proportion of specific data of each element per element collected, and whether the element and its characteristics are properly indexed or identified.

Relational: Whether the data collected contains common fields that would enable a conjoining, or meta-analysis, of different data sets.

Extensional: Whether new fields in each element of the data collected can be added or changed easily.

Scalability: Whether the size of the big data storage system can expand rapidly.

Big data repositories have existed in many forms, often built by corporations with a special need.
Commercial vendors historically offered parallel database management systems for big data beginning in the 1990s. For many years, WinterCorp published the largest database report.[41][promotional source?]

Teradata Corporation in 1984 marketed the parallel processing DBC 1012 system. Teradata systems were the first to store and analyze 1 terabyte of data in 1992. Hard disk drives were 2.5 GB in 1991, so the definition of big data continuously evolves. Teradata installed the first petabyte-class RDBMS-based system in 2007. As of 2017, there are a few dozen petabyte-class Teradata relational databases installed, the largest of which exceeds 50 PB. Until 2008, Teradata systems were 100% structured relational data. Since then, Teradata has added unstructured data types including XML, JSON, and Avro.

In 2000, Seisint Inc. (now LexisNexis Risk Solutions) developed a C++-based distributed platform for data processing and querying known as the HPCC Systems platform. This system automatically partitions, distributes, stores, and delivers structured, semi-structured, and unstructured data across multiple commodity servers. Users can write data processing pipelines and queries in a declarative dataflow programming language called ECL. Data analysts working in ECL are not required to define data schemas upfront and can rather focus on the particular problem at hand, reshaping data in the best possible manner as they develop the solution. In 2004, LexisNexis acquired Seisint Inc.[42] and its high-speed parallel processing platform, and successfully used this platform to integrate the data systems of ChoicePoint Inc. when it acquired that company in 2008.[43] In 2011, the HPCC Systems platform was open-sourced under the Apache v2.0 License.

CERN and other physics experiments have collected big data sets for many decades, usually analyzed via high-throughput computing rather than the map-reduce architectures usually meant by the current "big data" movement. In 2004, Google published a paper on a process called MapReduce that uses a similar architecture. The MapReduce concept provides a parallel processing model, and an associated implementation was released to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the "map" step). The results are then gathered and delivered (the "reduce" step). The framework was very successful,[44] so others wanted to replicate the algorithm. Therefore, an implementation of the MapReduce framework was adopted by an Apache open-source project named "Hadoop".[45] Apache Spark was developed in 2012 in response to limitations in the MapReduce paradigm, as it adds the ability to set up many operations (not just map followed by reduce).

MIKE2.0 is an open approach to information management that acknowledges the need for revisions due to big data implications identified in an article titled "Big Data Solution Offering".[46] The methodology addresses handling big data in terms of useful permutations of data sources, complexity in interrelationships, and difficulty in deleting (or modifying) individual records.[47]
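To make the map/shuffle/reduce dataflow described above concrete, here is a minimal, single-process sketch of MapReduce-style word counting in Python. Real frameworks such as Hadoop distribute these phases across many nodes; the function names here are illustrative only, not any framework's API.

```python
from collections import defaultdict

def map_phase(document):
    # "Map" step: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Shuffle step: group emitted values by key, so that each key
    # (word) can be handed to a single reducer.
    groups = defaultdict(list)
    for word, count in mapped_pairs:
        groups[word].append(count)
    return groups

def reduce_phase(word, counts):
    # "Reduce" step: combine all values for one key into a final result.
    return word, sum(counts)

documents = ["big data is big", "data about data"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
result = dict(reduce_phase(w, c) for w, c in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```

In a real cluster, each map task would typically run on the node holding its input split, and only the shuffle would move data across the network, which is what makes the model scale to very large inputs.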
Studies in 2012 showed that a multiple-layer architecture was one option to address the issues that big data presents. A distributed parallel architecture distributes data across multiple servers; these parallel execution environments can dramatically improve data processing speeds. This type of architecture inserts data into a parallel DBMS, which implements the use of MapReduce and Hadoop frameworks. This type of framework looks to make the processing power transparent to the end user by using a front-end application server.[48]

The data lake allows an organization to shift its focus from centralized control to a shared model to respond to the changing dynamics of information management. This enables quick segregation of data into the data lake, thereby reducing the overhead time.[49][50] A 2011 McKinsey Global Institute report characterizes the main components and ecosystem of big data as follows:[51]
Multidimensional big data can also be represented as OLAP data cubes or, mathematically, tensors. Array database systems have set out to provide storage and high-level query support on this data type. Additional technologies being applied to big data include efficient tensor-based computation,[52] such as multilinear subspace learning,[53] massively parallel-processing (MPP) databases, search-based applications, data mining,[54] distributed file systems, distributed cache (e.g., burst buffer and Memcached), distributed databases, cloud and HPC-based infrastructure (applications, storage and computing resources),[55] and the Internet.[citation needed] Although many approaches and technologies have been developed, it remains difficult to carry out machine learning with big data.[56]

Some MPP relational databases have the ability to store and manage petabytes of data. Implicit is the ability to load, monitor, back up, and optimize the use of the large data tables in the RDBMS.[57][promotional source?] DARPA's Topological Data Analysis program seeks the fundamental structure of massive data sets, and in 2008 the technology went public with the launch of a company called Ayasdi.[58][third-party source needed]

The practitioners of big data analytics processes are generally hostile to slower shared storage,[59] preferring direct-attached storage (DAS) in its various forms, from solid state drive (SSD) to high-capacity SATA disk buried inside parallel processing nodes. The perception of shared storage architectures—storage area network (SAN) and network-attached storage (NAS)—is that they are relatively slow, complex, and expensive. These qualities are not consistent with big data analytics systems that thrive on system performance, commodity infrastructure, and low cost. Real or near-real-time information delivery is one of the defining characteristics of big data analytics. Latency is therefore avoided whenever and wherever possible. Data in direct-attached memory or disk is good—data on memory or disk at the other end of an FC SAN connection is not. The cost of a SAN at the scale needed for analytics applications is much higher than other storage techniques.

[Figure: bus wrapped with SAP "big data" advertising parked outside IDF13]

Big data has increased the demand for information management specialists so much so that Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP, and Dell have spent more than $15 billion on software firms specializing in data management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year, about twice as fast as the software business as a whole.[6] Developed economies increasingly use data-intensive technologies. There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet.[6] Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more people became more literate, which in turn led to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, and 65 exabytes in 2007,[11] and predictions put the amount of internet traffic at 667 exabytes annually by 2014.[6] According to one estimate, one-third of the globally stored information is in the form of alphanumeric text and still image data,[60] which is the format most useful for most big data applications. This also shows the potential of yet unused data (i.e., in the form of video and audio content). While many vendors offer off-the-shelf products for big data, experts promote the development of in-house custom-tailored systems if the company has sufficient technical capabilities.[61]
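Returning to the OLAP-cube/tensor representation mentioned at the start of this section, here is a minimal sketch, assuming nothing beyond numpy, of a 3-way data cube and the standard OLAP-style operations on it. The axis names and figures are hypothetical.

```python
import numpy as np

# Hypothetical data cube with axes (product, region, month):
# entry [i, j, k] holds units of product i sold in region j during month k.
rng = np.random.default_rng(0)
cube = rng.integers(0, 100, size=(3, 4, 12))   # 3 products, 4 regions, 12 months

# Roll-up: aggregate out the month axis to get yearly totals per (product, region).
yearly = cube.sum(axis=2)                       # shape (3, 4)

# Slice: fix one product to get its region-by-month matrix.
product0 = cube[0]                              # shape (4, 12)

# Dice: a sub-cube restricted to two regions and the first quarter.
q1_subcube = cube[:, :2, :3]                    # shape (3, 2, 3)

print(yearly.shape, product0.shape, q1_subcube.shape)
```

Array database systems offer the same cube semantics at a scale where the tensor no longer fits on one machine.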
Government

The application of big data in the legal system, together with analysis techniques, is currently considered one of the possible ways to streamline the administration of justice. The use and adoption of big data within governmental processes allows efficiencies in terms of cost, productivity, and innovation,[62] but does not come without its flaws. Data analysis often requires multiple parts of government (central and local) to work in collaboration and create new and innovative processes to deliver the desired outcome. A common government organization that makes use of big data is the National Security Agency (NSA), which constantly monitors Internet activity in search of potential patterns of suspicious or illegal activity its systems may pick up. Civil registration and vital statistics (CRVS) systems collect all certificate statuses from birth to death; CRVS is a source of big data for governments.

International development

Research on the effective usage of information and communication technologies for development (also known as "ICT4D") suggests that big data technology can make important contributions but also present unique challenges to international development.[63][64] Advancements in big data analysis offer cost-effective opportunities to improve decision-making in critical development areas such as health care, employment, economic productivity, crime, security, and natural disaster and resource management.[65][66][67] Additionally, user-generated data offers new opportunities to give the unheard a voice.[68] However, longstanding challenges for developing regions such as inadequate technological infrastructure and economic and human resource scarcity exacerbate existing concerns with big data such as privacy, imperfect methodology, and interoperability issues.[65] The challenge of "big data for development"[65] is currently evolving toward the application of this data through machine learning, known as "artificial intelligence for development" (AI4D).[69]

Benefits

A major practical application of big data for development has been "fighting poverty with data".[70] In 2015, Blumenstock and colleagues predicted poverty and wealth from mobile phone metadata,[71] and in 2016 Jean and colleagues combined satellite imagery and machine learning to predict poverty.[72] Using digital trace data to study the labor market and the digital economy in Latin America, Hilbert and colleagues[73][74] argue that digital trace data has several benefits such as:
Challenges

At the same time, working with digital trace data instead of traditional survey data does not eliminate the traditional challenges involved when working in the field of international quantitative analysis. Priorities change, but the basic discussions remain the same. Among the main challenges are:
Healthcare

Big data analytics has been used in healthcare in providing personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms, and patient registries.[76][77][78][79] Some areas of improvement are more aspirational than actually implemented. The level of data generated within healthcare systems is not trivial. With the added adoption of mHealth, eHealth, and wearable technologies, the volume of data will continue to increase. This includes electronic health record data, imaging data, patient-generated data, sensor data, and other forms of data that are difficult to process. There is now an even greater need for such environments to pay greater attention to data and information quality.[80] "Big data very often means 'dirty data' and the fraction of data inaccuracies increases with data volume growth." Human inspection at the big data scale is impossible, and there is a desperate need in health services for intelligent tools for accuracy and believability control and for handling information that is missed.[81] While extensive information in healthcare is now electronic, it fits under the big data umbrella as most of it is unstructured and difficult to use.[82] The use of big data in healthcare has raised significant ethical challenges, ranging from risks for individual rights, privacy, and autonomy to transparency and trust.[83]

Big data in health research is particularly promising in terms of exploratory biomedical research, as data-driven analysis can move forward more quickly than hypothesis-driven research.[84] Trends seen in data analysis can then be tested in traditional, hypothesis-driven follow-up biological research and eventually clinical research. A related application sub-area within the healthcare field that relies heavily on big data is that of computer-aided diagnosis in medicine.[85] For instance, for epilepsy monitoring it is customary to create 5 to 10 GB of data daily.[86] Similarly, a single uncompressed image of breast tomosynthesis averages 450 MB of data.[87] These are just a few of the many examples where computer-aided diagnosis uses big data. For this reason, big data has been recognized as one of the seven key challenges that computer-aided diagnosis systems need to overcome in order to reach the next level of performance.[88]

Education

A McKinsey Global Institute study found a shortage of 1.5 million highly trained data professionals and managers,[51] and a number of universities,[89][better source needed] including the University of Tennessee and UC Berkeley, have created master's programs to meet this demand. Private boot camps have also developed programs to meet that demand, including free programs like The Data Incubator or paid programs like General Assembly.[90] In the specific field of marketing, one of the problems stressed by Wedel and Kannan[91] is that marketing has several sub-domains (e.g., advertising, promotions, product development, branding) that all use different types of data.

Media

To understand how the media uses big data, it is first necessary to provide some context into the mechanisms used in the media process. It has been suggested by Nick Couldry and Joseph Turow that practitioners in media and advertising approach big data as many actionable points of information about millions of individuals.
The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers, insights gleaned exclusively through various data-mining activities.[92]
Channel 4, the British public-service television broadcaster, is a leader in the field of big data and data analysis.[94]

Insurance

Health insurance providers are collecting data on social "determinants of health" such as food and TV consumption, marital status, clothing size, and purchasing habits, from which they make predictions on health costs, in order to spot health issues in their clients. It is controversial whether these predictions are currently being used for pricing.[95]

Internet of things (IoT)

Big data and the IoT work in conjunction. Data extracted from IoT devices provides a mapping of device inter-connectivity. Such mappings have been used by the media industry, companies, and governments to more accurately target their audience and increase media efficiency. The IoT is also increasingly adopted as a means of gathering sensory data, and this sensory data has been used in medical,[96] manufacturing,[97] and transportation[98] contexts. Kevin Ashton, the digital innovation expert who is credited with coining the term,[99] defines the Internet of things in this quote: "If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling, and whether they were fresh or past their best."

Information technology

Especially since 2015, big data has come to prominence within business operations as a tool to help employees work more efficiently and streamline the collection and distribution of information technology (IT). The use of big data to resolve IT and data collection issues within an enterprise is called IT operations analytics (ITOA).[100] By applying big data principles to the concepts of machine intelligence and deep computing, IT departments can predict potential issues and prevent them.[100] ITOA businesses offer platforms for systems management that bring data silos together and generate insights from the whole of the system rather than from isolated pockets of data.
Examples of uses of big data in public services:
United States
Retail
Science
Sports

Big data can be used to improve training and understand competitors, using sport sensors. It is also possible to predict winners in a match using big data analytics.[140] Future performance of players could be predicted as well; thus, players' value and salary are determined by data collected throughout the season.[141] In Formula One races, race cars with hundreds of sensors generate terabytes of data. These sensors collect data points from tire pressure to fuel-burn efficiency.[142] Based on the data, engineers and data analysts decide whether adjustments should be made in order to win a race. In addition, race teams use big data to try to predict the time they will finish the race beforehand, based on simulations using data collected over the season.[143]

Technology
COVID-19

During the COVID-19 pandemic, big data was raised as a way to minimise the impact of the disease. Significant applications of big data included minimising the spread of the virus, case identification, and development of medical treatment.[149] Governments used big data to track infected people to minimise spread. Early adopters included China, Taiwan, South Korea, and Israel.[150][151][152]

Research activities

Encrypted search and cluster formation in big data were demonstrated in March 2014 at the American Society of Engineering Education. Gautam Siwach of the MIT Computer Science and Artificial Intelligence Laboratory and Amir Esmailpour of the UNH Research Group investigated the key features of big data, namely the formation of clusters and their interconnections. They focused on the security of big data and the orientation of the term towards the presence of different types of data in an encrypted form at the cloud interface, providing raw definitions and real-time examples within the technology. Moreover, they proposed an approach for identifying the encoding technique to advance towards an expedited search over encrypted text, leading to security enhancements in big data.[153]

In March 2012, the White House announced a national "Big Data Initiative" that consisted of six federal departments and agencies committing more than $200 million to big data research projects.[154] The initiative included a National Science Foundation "Expeditions in Computing" grant of $10 million over five years to the AMPLab[155] at the University of California, Berkeley.[156] The AMPLab also received funds from DARPA and over a dozen industrial sponsors, and uses big data to attack a wide range of problems from predicting traffic congestion[157] to fighting cancer.[158] The White House Big Data Initiative also included a commitment by the Department of Energy to provide $25 million in funding over five years to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute,[159] led by the Energy Department's Lawrence Berkeley National Laboratory. The SDAV Institute aims to bring together the expertise of six national laboratories and seven universities to develop new tools to help scientists manage and visualize data on the department's supercomputers.

The U.S. state of Massachusetts announced the Massachusetts Big Data Initiative in May 2012, which provides funding from the state government and private companies to a variety of research institutions.[160] The Massachusetts Institute of Technology hosts the Intel Science and Technology Center for Big Data in the MIT Computer Science and Artificial Intelligence Laboratory, combining government, corporate, and institutional funding and research efforts.[161] The European Commission is funding the two-year-long Big Data Public Private Forum through their Seventh Framework Program to engage companies, academics, and other stakeholders in discussing big data issues. The project aims to define a strategy in terms of research and innovation to guide supporting actions from the European Commission in the successful implementation of the big data economy.
Outcomes of this project will be used as input for Horizon 2020, their next framework program.[162] The British government announced in March 2014 the founding of the Alan Turing Institute, named after the computer pioneer and code-breaker, which will focus on new ways to collect and analyze large data sets.[163] At the University of Waterloo Stratford Campus Canadian Open Data Experience (CODE) Inspiration Day, participants demonstrated how using data visualization can increase the understanding and appeal of big data sets and communicate their story to the world.[164]

Computational social sciences – Anyone can use application programming interfaces (APIs) provided by big data holders, such as Google and Twitter, to do research in the social and behavioral sciences.[165] Often these APIs are provided for free.[165] Tobias Preis et al. used Google Trends data to demonstrate that Internet users from countries with a higher per-capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings suggest there may be a link between online behaviors and real-world economic indicators.[166][167][168] The authors of the study examined Google query logs and computed the ratio of the volume of searches for the coming year (2011) to the volume of searches for the previous year (2009), which they call the "future orientation index".[169] They compared the future orientation index to the per-capita GDP of each country, and found a strong tendency for countries where Google users inquire more about the future to have a higher GDP.

Tobias Preis and his colleagues Helen Susannah Moat and H. Eugene Stanley introduced a method to identify online precursors for stock market moves, using trading strategies based on search volume data provided by Google Trends.[170] Their analysis of Google search volume for 98 terms of varying financial relevance, published in Scientific Reports,[171] suggests that increases in search volume for financially relevant search terms tend to precede large losses in financial markets.[172][173][174][175][176][177][178]

Big data sets come with algorithmic challenges that previously did not exist. Hence, some see a need to fundamentally change the way data is processed.[179] The Workshops on Algorithms for Modern Massive Data Sets (MMDS) bring together computer scientists, statisticians, mathematicians, and data analysis practitioners to discuss algorithmic challenges of big data.[180] Regarding big data, such concepts of magnitude are relative. As it is stated, "If the past is of any guidance, then today's big data most likely will not be considered as such in the near future."[85]

Sampling big data

A research question asked about big data sets is whether it is necessary to look at the full data to draw certain conclusions about the properties of the data, or if a sample is good enough. The name big data itself contains a term related to size, and this is an important characteristic of big data. But sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict downtime it may not be necessary to look at all the data; a sample may be sufficient.
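One classic answer to the sampling question above is reservoir sampling, a one-pass technique for drawing a uniform fixed-size sample from a stream too large to hold in memory. The sketch below is a generic Python illustration, not the Twitter-specific formulation cited below.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Draw k items uniformly at random from an iterable of unknown,
    possibly enormous, length using O(k) memory and a single pass."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # uniform slot among the i+1 items seen
            if j < k:
                reservoir[j] = item     # keep new item with probability k/(i+1)
    return reservoir

# Example: sample 5 readings from a simulated stream of one million sensor values.
readings = (x * 0.001 for x in range(1_000_000))
print(reservoir_sample(readings, k=5, seed=42))
```

Because each arriving item displaces a stored one with probability k/(i+1), every item in the stream ends up in the final sample with equal probability, without the stream's length being known in advance.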
Big data can be broken down by various data point categories such as demographic, psychographic, behavioral, and transactional data. With large sets of data points, marketers are able to create and use more customized segments of consumers for more strategic targeting. There has been some work done in sampling algorithms for big data; for example, a theoretical formulation for sampling Twitter data has been developed.[181]

Critiques of the big data paradigm come in two flavors: those that question the implications of the approach itself, and those that question the way it is currently done.[182] One approach to this criticism is the field of critical data studies.

Critiques of the big data paradigm

"A crucial problem is that we do not know much about the underlying empirical micro-processes that lead to the emergence of the[se] typical network characteristics of Big Data."[23] In their critique, Snijders, Matzat, and Reips point out that often very strong assumptions are made about mathematical properties that may not at all reflect what is really going on at the level of micro-processes. Mark Graham has leveled broad critiques at Chris Anderson's assertion that big data will spell the end of theory,[183] focusing in particular on the notion that big data must always be contextualized in its social, economic, and political contexts.[184] Even as companies invest eight- and nine-figure sums to derive insight from information streaming in from suppliers and customers, less than 40% of employees have sufficiently mature processes and skills to do so. To overcome this insight deficit, big data, no matter how comprehensive or well analyzed, must be complemented by "big judgment", according to an article in the Harvard Business Review.[185]

Much in the same line, it has been pointed out that the decisions based on the analysis of big data are inevitably "informed by the world as it was in the past, or, at best, as it currently is".[65] Fed by a large amount of data on past experiences, algorithms can predict future development if the future is similar to the past.[186] If the system's dynamics change in the future (if it is not a stationary process), the past can say little about the future. In order to make predictions in changing environments, it would be necessary to have a thorough understanding of the system's dynamics, which requires theory.[186] As a response to this critique, Alemany Oliver and Vayre suggest using "abductive reasoning as a first step in the research process in order to bring context to consumers' digital traces and make new theories emerge".[187] Additionally, it has been suggested to combine big data approaches with computer simulations, such as agent-based models[65] and complex systems. Agent-based models are increasingly getting better at predicting the outcome of social complexities of even unknown future scenarios through computer simulations that are based on a collection of mutually interdependent algorithms.[188][189] Finally, the use of multivariate methods that probe for the latent structure of the data, such as factor analysis and cluster analysis, has proven useful as an analytic approach that goes well beyond the bivariate approaches (e.g., contingency tables) typically employed with smaller data sets.

In health and biology, conventional scientific approaches are based on experimentation.
For these approaches, the limiting factor is the relevant data that can confirm or refute the initial hypothesis.[190] A new postulate is accepted now in biosciences: the information provided by data in huge volumes (omics), without a prior hypothesis, is complementary and sometimes necessary to conventional approaches based on experimentation.[191][192] In the massive approaches, it is the formulation of a relevant hypothesis to explain the data that is the limiting factor.[193] The search logic is reversed, and the limits of induction ("Glory of Science and Philosophy scandal", C. D. Broad, 1926) are to be considered.[citation needed]

Privacy advocates are concerned about the threat to privacy represented by the increasing storage and integration of personally identifiable information; expert panels have released various policy recommendations to conform practice to expectations of privacy.[194] The misuse of big data in several cases by media, companies, and even the government has contributed to an erosion of trust in almost every fundamental institution holding up society.[195] Nayef Al-Rodhan argues that a new kind of social contract will be needed to protect individual liberties in the context of big data and giant corporations that own vast amounts of information, and that the use of big data should be monitored and better regulated at the national and international levels.[196] Barocas and Nissenbaum argue that one way of protecting individual users is by being informed about the types of information being collected, with whom it is shared, under what constraints, and for what purposes.[197]

Critiques of the "V" model

The "V" model of big data is concerning, as it centers on computational scalability and overlooks the perceptibility and understandability of information. This led to the framework of cognitive big data, which characterizes big data applications according to:[198]
Critiques of novelty

Large data sets have been analyzed by computing machines for well over a century, including the US census analytics performed by IBM's punch-card machines, which computed statistics including means and variances of populations across the whole continent. In more recent decades, science experiments such as CERN have produced data on similar scales to current commercial "big data". However, science experiments have tended to analyze their data using specialized custom-built high-performance computing (super-computing) clusters and grids, rather than clouds of cheap commodity computers as in the current commercial wave, implying a difference in both culture and technology stack.

Critiques of big data execution

Ulf-Dietrich Reips and Uwe Matzat wrote in 2014 that big data had become a "fad" in scientific research.[165] Researcher danah boyd has raised concerns about the use of big data in science neglecting principles such as choosing a representative sample by being too concerned about handling the huge amounts of data.[199] This approach may lead to results that are biased in one way or another.[200] Integration across heterogeneous data resources—some that might be considered big data and others not—presents formidable logistical as well as analytical challenges, but many researchers argue that such integrations are likely to represent the most promising new frontiers in science.[201] In the provocative article "Critical Questions for Big Data",[202] the authors call big data a part of mythology: "large data sets offer a higher form of intelligence and knowledge [...], with the aura of truth, objectivity, and accuracy". Users of big data are often "lost in the sheer volume of numbers", and "working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth".[202] Recent developments in the BI domain, such as pro-active reporting, especially target improvements in the usability of big data through automated filtering of non-useful data and correlations.[203]

Big structures are full of spurious correlations,[204] whether because of non-causal coincidences (the law of truly large numbers), the nature of big randomness alone[205] (Ramsey theory), or the existence of non-included factors; thus the hope of early experimenters that large databases of numbers would "speak for themselves" and revolutionize the scientific method is questioned.[206] Catherine Tucker has pointed to "hype" around big data, writing "By itself, big data is unlikely to be valuable." The article explains: "The many contexts where data is cheap relative to the cost of retaining talent to process it, suggests that processing skills are more important than data itself in creating value for a firm."[207]

Big data analysis is often shallow compared to analysis of smaller data sets.[208] In many big data projects, there is no large data analysis happening; rather, the challenge is the extract, transform, load part of data pre-processing.[208] Big data is a buzzword and a "vague term",[209][210] but at the same time an "obsession"[210] of entrepreneurs, consultants, scientists, and the media. Big data showcases such as Google Flu Trends failed to deliver good predictions in recent years, overstating flu outbreaks by a factor of two. Similarly, Academy Awards and election predictions based solely on Twitter were more often off than on target.
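The "law of truly large numbers" effect described above is easy to demonstrate; here is a minimal sketch in Python: among a growing number of independent noise series, an exhaustive search finds ever-stronger pairwise correlations, even though no real relationship exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 50  # short series make chance correlations easier to find

for n_series in (10, 100, 1000):
    data = rng.normal(size=(n_series, n_obs))  # independent pure-noise series
    corr = np.corrcoef(data)                   # all pairwise correlations
    np.fill_diagonal(corr, 0.0)                # ignore trivial self-correlation
    strongest = np.abs(corr).max()
    print(f"{n_series:>5} series -> strongest spurious correlation: {strongest:.2f}")
```

The strongest correlation found keeps growing simply because the number of pairs searched grows quadratically with the number of series, which is exactly why large databases cannot be trusted to "speak for themselves".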
Big data often poses the same challenges as small data; adding more data does not solve problems of bias, but may emphasize other problems. In particular, data sources such as Twitter are not representative of the overall population, and results drawn from such sources may then lead to wrong conclusions. Google Translate—which is based on big data statistical analysis of text—does a good job of translating web pages. However, results from specialized domains may be dramatically skewed. On the other hand, big data may also introduce new problems, such as the multiple comparisons problem: simultaneously testing a large set of hypotheses is likely to produce many false results that mistakenly appear significant (a simulation sketch of this effect follows at the end of this section). Ioannidis argued that "most published research findings are false"[211] due to essentially the same effect: when many scientific teams and researchers each perform many experiments (i.e., process a big amount of scientific data, although not with big data technology), the likelihood of a "significant" result being false grows fast, even more so when only positive results are published. Furthermore, big data analytics results are only as good as the model on which they are predicated. In one example, big data was used in attempts to predict the results of the 2016 U.S. presidential election[212] with varying degrees of success.

Critiques of big data policing and surveillance

Big data has been used in policing and surveillance by institutions like law enforcement and corporations.[213] Due to the less visible nature of data-based surveillance as compared to traditional methods of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing,[214] big data policing can reproduce existing societal inequalities in three ways:
If these potential problems are not corrected or regulated, the effects of big data policing may continue to shape societal hierarchies. Conscientious usage of big data policing could prevent individual-level biases from becoming institutional biases, Brayne also notes.
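Returning to the multiple comparisons problem flagged above, here is a minimal simulation sketch in Python in which every attribute is pure noise, so every "significant" association with the outcome is a false discovery. At a 5% significance level, roughly 5% of tests come out "significant" no matter how many are run, so wider data sets yield ever more spurious findings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows = 500
outcome = rng.normal(size=n_rows)  # random outcome: no attribute truly matters

for n_cols in (10, 100, 10_000):
    attributes = rng.normal(size=(n_rows, n_cols))   # purely random attributes
    # Pearson correlation of each attribute (column) with the outcome.
    centered = attributes - attributes.mean(axis=0)
    r = centered.T @ (outcome - outcome.mean())
    r /= n_rows * attributes.std(axis=0) * outcome.std()
    # For large n, |r| > 1.96/sqrt(n) is roughly "significant" at p < 0.05.
    false_hits = int(np.sum(np.abs(r) > 1.96 / np.sqrt(n_rows)))
    print(f"{n_cols:>6} attributes tested -> {false_hits} spurious 'significant' findings")
```

This is the same mechanism behind the false discovery rate caveat in this article's opening paragraph: statistical power grows with rows, but uncorrected testing across many columns manufactures findings out of noise.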
23andMe, Inc. is a publicly held personal genomics and biotechnology company based in Sunnyvale, California. It is best known for providing a direct-to-consumer genetic testing service in which customers provide a saliva sample that is laboratory-analyzed, using single nucleotide polymorphism genotyping,[1] to generate reports relating to the customer's ancestry and genetic predispositions to health-related topics. The company's name is derived from the 23 pairs of chromosomes in a wild-type human cell.[2]

Traded as: Nasdaq: ME
Headquarters: Mountain View, California, United States
Key people: Anne Wojcicki (CEO), Esther Dyson (board member)
Products: Direct-to-consumer personal genome testing, mobile application
Services: Genetic testing, genealogical DNA testing, medical research
Revenue: US$259.92 million (2021)
Number of employees: 683 (2019)
Website: 23andme.com

The company had a previously fraught relationship with the United States Food and Drug Administration (FDA) due to its genetic health tests; as of October 2015, DNA tests ordered in the US include a revised health component, per FDA approval.[3][4] 23andMe has been selling a product with both ancestry and health-related components in Canada since October 2014,[5][6][7] and in the UK since December 2014.[8] In 2007, 23andMe became the first company to begin offering autosomal DNA testing for ancestry, which all other major companies now use.[9] Its saliva-based direct-to-consumer genetic testing business was named "Invention of the Year" by Time in 2008.[10][11][12]

Linda Avey, Paul Cusenza, and Anne Wojcicki founded 23andMe in 2006 to offer genetic testing and interpretation to individuals.[1] Investment documents from 2007 also suggest that 23andMe hoped to develop a database to pursue research efforts.[13] In 2007, Google invested $3.9 million in the company, along with Genentech, New Enterprise Associates, and Mohr Davidow Ventures.[14] Wojcicki and Google co-founder Sergey Brin were married at the time.[6] Cusenza left in 2007 and joined Nodal Exchange as CEO the following year.[15] Avey left in 2009 and co-founded Curious, Inc. in 2011.[16]

In 2012, 23andMe raised $50 million in a Series D venture round, almost doubling its existing capital of $52.6 million.[17] In 2015, 23andMe raised $115 million in a Series E offering, increasing its capital to $241 million.[4][18][19] In June 2017, 23andMe created a brand marketing advertisement featuring Gru from Despicable Me.[20] In 2018, the company launched advertisements narrated by Warren Buffett.[21] In September 2017, it was rumored the company was raising another $200 million at a $1.5 billion valuation; as of that time, the company had raised $230 million since its inception.[22] Afterwards, it was reported the company raised $250 million at a $1.75 billion valuation.[23]

On July 25, 2018, 23andMe announced a partnership with GlaxoSmithKline to allow the pharmaceutical company to use test results from 5 million customers to design new drugs. GlaxoSmithKline invested $300 million in the company.[24] In January 2022, this partnership was extended until July 2023 with an additional $50 million payment from GlaxoSmithKline.[25] In January 2020, 23andMe announced it would lay off about 100 of its employees.[26] In July 2020, 23andMe and GlaxoSmithKline announced their partnership's first clinical trial: a joint asset being co-developed by the two companies for cancer treatment.[27] In December 2020, the company raised around $82.5 million in a Series F round, bringing the total raised over the years to over $850 million.
The post-money valuation was not reported.[28] In February 2021, the company announced that it had entered into a definitive agreement to merge with Sir Richard Branson's special-purpose acquisition company, VG Acquisition Corp, in a $3.5 billion transaction.[29] In June 2021, the company completed the merger with VG Acquisition Corp. The combined company was renamed 23andMe Holding Co. and began trading on the Nasdaq stock exchange on June 17, 2021 under the ticker symbol "ME".[30] In October 2021, 23andMe announced that it would acquire Lemonaid Health, a telehealth company, for $400 million, with the deal closing in November.[31][32]

The new genetic testing service and the ability to map significant portions of the genome have raised controversial questions, including whether the results can be interpreted meaningfully and whether they will lead to genetic discrimination.[10][33] The regulatory environment for genetic testing companies has been uncertain, and anticipated risk-based regulation catering for different types of genetic tests has not yet materialized.[34][35][36]

State regulators

In 2008, the states of New York and California each provided notice to 23andMe and similar companies that they needed to obtain a CLIA license in order to sell tests in those states.[10][37][38] By August 2008, 23andMe had received licenses that allowed it to continue to do business in California.[39]

FDA

According to Anne Wojcicki, 23andMe had been in dialogue with the FDA since 2008.[36] In 2010, the FDA notified several genetic testing companies, including 23andMe, that their genetic tests are considered medical devices and federal approval is required to market them; a similar letter was sent to Illumina, which makes the instruments and chips used by 23andMe in providing its service.[34][40][41] 23andMe first submitted applications for FDA clearance in July and September 2012.[42]

In November 2013, the FDA published guidance on how it classified genetic analysis and testing services offered by companies using instruments and chips labeled for "research use only" and instruments and chips that had been approved for clinical use.[43] At around the same time, after not hearing from 23andMe for six months, the FDA ordered 23andMe to stop marketing its saliva collection kit and personal genome service (PGS), as 23andMe had not demonstrated that it had "analytically or clinically validated the PGS for its intended uses" and the "FDA is concerned about the public health consequences of inaccurate results from the PGS device".[42][44][45] As of December 2, 2013, 23andMe had stopped all advertisements for its PGS test but was still selling the product.[46][47] As of December 5, 2013, 23andMe was selling only raw genetic data and ancestry-related results.[3][48][49]

23andMe publicly responded to media reports on November 25, 2013, stating, "We recognize that we have not met the FDA's expectations regarding timeline and communication regarding our submission. Our relationship with the FDA is extremely important to us and we are committed to fully engaging with them to address their concerns."[50][51][52] CEO Anne Wojcicki subsequently posted an update on the 23andMe website, stating: "This is new territory for both 23andMe and the FDA. This makes the regulatory process with the FDA important because the work we are doing with the agency will help lay the groundwork for what other companies in this new industry do in the future.
It will also provide important reassurance to the public that the process and science behind the service meet the rigorous standards required by those entrusted with the public's safety."[36]

On December 5, 2013, 23andMe announced that it had suspended health-related genetic tests for customers who purchased the test from November 22, 2013, in order to comply with the FDA warning letter while undergoing regulatory review.[3][48][49] In May 2014, it was reported that 23andMe was exploring alternative locations abroad, including Canada, Australia, and the United Kingdom, in which to offer its full genetic testing service.[53] 23andMe had been selling a product with both ancestry and health-related components in Canada since October 2014,[5][6][7] and in the UK since December 2014.[8]

In 2014, 23andMe submitted a 510(k) application to the FDA to market a carrier test for Bloom syndrome, which included data showing that 23andMe's results were consistent and reliable, showed that the saliva collection kit and instructions were easy enough for people to use without making mistakes that might affect their results, and included citations to the scientific literature showing that the specific tests that 23andMe offered were associated with Bloom syndrome.[54][55] The FDA cleared the test in February 2015; in the clearance notice, the FDA said that it would not require similar applications for other carrier tests from 23andMe.[54][56] The FDA sent further clarification about regulation of the test to 23andMe on October 1, 2015.[57] On October 21, 2015, 23andMe announced that it would begin marketing carrier tests in the US again.[4] Wojcicki said, "There was part of us that didn't understand how the regulatory environment works", with regard to the regulatory functions distributed between the FDA and the Centers for Medicare and Medicaid Services (CMS).[58]

23andMe submitted a "de novo" application to the FDA to market tests that provide people with information about whether they have gene mutations or alleles that put them at risk for getting or having certain diseases; the application included data showing that 23andMe's results were consistent and reliable. In April 2017, the FDA approved the applications for ten tests: late-onset Alzheimer's disease, Parkinson's disease, celiac disease, hereditary thrombophilia, alpha-1 antitrypsin deficiency, glucose-6-phosphate dehydrogenase deficiency, early-onset primary dystonia, factor XI deficiency, hereditary hemochromatosis, and Gaucher's disease.[59][60] The FDA also said that it intended to exempt further 23andMe genetic risk tests from needing 510(k) applications, and it clarified that it was only approving genetic risk tests, not diagnostic tests.[59] In March 2018, the FDA approved another de novo application from the company, this one for a DTC test for three specific BRCA mutations that are the most common BRCA mutations in people of Ashkenazi descent; they are not, however, the most common BRCA mutations in the general population, and the test covers only three of the approximately 1,000 known mutations.[61] These mutations increase the risk of breast and ovarian cancer in women, and the risk of breast and prostate cancer in men.[62]

[Images: 23andMe genome testing kits from 2021 and 2013]

23andMe began offering direct-to-consumer genetic testing in November 2007.
Customers provide a saliva sample that is partially single nucleotide polymorphism (SNP) genotyped, and results are posted online.[1][63][64] In 2008, when the company was offering estimates of "predisposition for more than 90 traits and conditions ranging from baldness to blindness", Time magazine named the product "Invention of the Year".[10]

After the sample is received by the lab, the DNA is extracted from the saliva and amplified so that there is enough to be genotyped. The DNA is then cut into small pieces and applied to a glass microarray chip, which has many microscopic beads applied to its surface. Each bead carries a gene probe that matches the DNA of one of the many variants the company tests for. If the sample has a match in the microarray, the sequences will hybridize, or bind together; a fluorescent label on the probes then lets researchers know that the variant is present in the customer's genome. Tens of thousands of variants are tested, out of the 10 to 30 million located in the entire genome. These matches are then compiled into a report supplied to the customer, allowing them to know whether the variants associated with certain diseases, such as Parkinson's, celiac disease, and Alzheimer's, are present in their own genome.[65]

Uninterpreted raw genetic data may be downloaded by customers.[33] This gives customers the ability to choose one of the 23 chromosomes, as well as mitochondrial DNA, see which base is located at certain positions in genes, and see how these compare to other common variants. Customers who bought tests with an ancestry-related component have online access to genealogical DNA test results and tools, including a relative-matching database. Customers can also view their mitochondrial haplogroup (maternal) and, if they are male or a relative who shares a patriline has also been tested, their Y chromosome haplogroup (paternal).
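The raw data download mentioned above is commonly distributed as a simple tab-separated text file with one SNP per line. The sketch below shows how such a file might be inspected in Python; the assumed column layout ("#"-prefixed comment lines, then rsid, chromosome, position, genotype) and the example rsids are illustrative assumptions, not a documented 23andMe format guarantee.

```python
import csv

def load_genotypes(path):
    """Parse an assumed 23andMe-style raw data file: '#' comment lines,
    then tab-separated columns rsid, chromosome, position, genotype."""
    genotypes = {}
    with open(path) as f:
        rows = csv.reader((line for line in f if not line.startswith("#")),
                          delimiter="\t")
        for rsid, chromosome, position, genotype in rows:
            genotypes[rsid] = (chromosome, int(position), genotype)
    return genotypes

# Hypothetical usage: look up the base calls at a few SNPs of interest.
# snps = load_genotypes("genome_raw_data.txt")   # path is illustrative
# for rsid in ("rs4988235", "rs1815739"):        # illustrative rsids
#     chrom, pos, call = snps.get(rsid, ("?", 0, "--"))
#     print(f"{rsid}: chromosome {chrom}, position {pos}, genotype {call}")
```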
US customers who bought tests with a health-related component and received health-related results before November 22, 2013 have online access to an assessment of inherited traits and genetic disorder risks.[3][66][67] Health-related results for US customers who purchased the test from November 22, 2013 onward were suspended until late 2015 while undergoing FDA regulatory review.[4][48][49] Customers who bought tests from 23andMe's Canadian and UK locations have access to some, but not all, health-related results.[5][8] As of February 2018, 23andMe had genotyped over 3,000,000 individuals.[68] FDA marketing restrictions reduced customer growth rates.[69] 23andMe is commonly used by donor-conceived people to find their biological siblings and, in some cases, their sperm or egg donor.[70]

Product changes

In late 2009, 23andMe split its genotyping service into three products with different prices: an Ancestry Edition, a Health Edition, and a Complete Edition.[71] This decision was reversed a year later, when the products were recombined.[34] In late 2010, the company introduced a monthly subscription fee for updates based on new medical research findings.[34][72] The subscription model proved unpopular with customers and was eliminated in mid-2012.[73]

Because of FDA restrictions, 23andMe sold only raw genetic data and ancestry-related results in the US from November 22, 2013 until October 21, 2015,[3][48][49] when it announced that it would resume providing health information in the form of carrier status and wellness reports with FDA approval.[74] Wojcicki said the company still planned to report on disease risk, subject to future FDA approval.[4]

The price of the full direct-to-consumer testing service in the US fell from $999 in 2007 to $399 in 2008[75] and to $99 in 2012,[17] by which point it was effectively being sold as a loss leader to build a valuable customer database.[33][76][77] In October 2015, the US price was raised to $199.[74] In September 2016, an ancestry-only version was once again offered at a lower price of $99, with an option to add the health component later for an additional $125.[78] The initial price of the product sold in Canada from October 2014, which includes health-related results, was CA$99.[5][6] The initial price of the product sold in the UK from December 2014, which includes health-related results, was £125.[79]

In February 2018, 23andMe announced that its ancestry reporting would tell people what country they were from, not just what region, and increased the number of regions it reports by 120. Like other companies, it still lacked data on Asia and Africa, a gap its African Genetics Program (launched in October 2016 with a grant from the US National Institutes of Health) aims to close by recruiting sub-Saharan Africans to increase the genomic data on racial and ethnic minorities.[80][81] Building on the African Genetics Program, the Global Genetics Program was also announced in February 2018; it aims to increase the genomic data from 61 underrepresented countries in the database by providing free tests to individuals who have all four grandparents from one of those countries.
In April 2018, 23andMe announced the Populations Collaboration Program, which sets up formal collaborations between the company and researchers investigating underrepresented populations.[82]

Additional services

Since October 1, 2020, the company has offered a service called "23andMe+", priced at $29/year, for customers of the "Health + Ancestry" service who were genotyped on version 5 of the microarray chip used by the company. The new service provides additional reports on health and pharmacogenetics and commits to ongoing new reports and features.[83]

Lemonaid Health acquisition

At the end of 2021, 23andMe acquired the digital healthcare company Lemonaid Health for $400 million to "...give patients and healthcare providers better information about health risks and treatment". Paul Johnson, CEO and co-founder of Lemonaid Health, became COO of the 23andMe consumer business.[84][85][86]

Instrument and chip versions

Until 2010, Illumina sold only instruments that were labeled "for research use only"; in early 2010, Illumina obtained FDA approval for its BeadXpress system to be used in clinical tests.[87][88]

In June 2020, 23andMe published results from a study suggesting that people with type O blood may be at lower risk of catching COVID-19. Among more than 750,000 participants, those with type O blood were 9–18% less likely to contract the virus, and those who had been exposed were 13–26% less likely to test positive. The study was ongoing and had not been peer-reviewed.[89][90][91]

Some customers comparing 23andMe ancestry results with those from other genomic and ancestry testing companies have received differing results, possibly due to human error or to differing analyses of the extracted DNA caused by the overrepresentation of one country or region over another in each company's database.[92] Ancestry results depend on how confident the company is that a stretch of DNA comes from a specific region: a segment is assigned to a specific country when confidence is high, and to a broad region when confidence is low. This can produce surprising results, because ancestry from a specific country can be masked when confidence in the assignment is low.[93] In August 2018, the company said it was broadening its coverage of Africa and East Asia.[94] The possibility of false positives also adds to customer confusion and unnecessary concern when interpreting results.[95]

2019 research from the University of Southampton used the company as an example of direct-to-consumer tests that emphasize "breadth over detail", in one case checking only a few variants of a particular cancer-causing gene out of the possible thousands, and said that such tests were generally unreliable.[96]

In 2019, identical twin sisters Charlsie and Carly Agro both took DNA tests from 23andMe, AncestryDNA, MyHeritage, FamilyTreeDNA, and LivingDNA, and found that the results for one sister did not match the results for the other.[97] Charlsie's 23andMe test reported 38% Italian, 28% Eastern European, 15% Balkan, around 3% Broadly European, 13% other, and 2.6% French and German, while Carly's reported 37% Italian, 25% Eastern European, 14% Balkan, 13% Broadly European, and 12% other. Carly's test did not detect the French and German ancestry that Charlsie's did, but it did detect Polish ancestry, under the Eastern European category, which went undetected in Charlsie's results.
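The confidence-based masking described above can be pictured with a small, entirely hypothetical sketch: each DNA segment gets a candidate population and a confidence score, and the report falls back from a specific country to a broad region when the score misses a threshold. The 0.9 cutoff and the country-to-region map below are invented for illustration; 23andMe's actual model and thresholds are not public in this form:

# Hypothetical illustration of confidence-thresholded ancestry reporting.
# A segment is labeled with a specific country only when the model's
# confidence clears the threshold; otherwise it is reported as the
# broader region, which can "mask" country-level ancestry.
REGION_OF = {"Italy": "Southern European", "Poland": "Eastern European"}

def report_segment(country, confidence, threshold=0.9):
    if confidence >= threshold:
        return country                                   # specific country
    return REGION_OF.get(country, "Broadly European")    # broad fallback

print(report_segment("Poland", 0.95))  # -> "Poland"
print(report_segment("Poland", 0.60))  # -> "Eastern European" (Polish ancestry masked)

Under such a scheme, two nearly identical genomes whose segment scores straddle the threshold in a few places would receive visibly different country-level breakdowns, which is consistent with twin discrepancies like the undetected Polish ancestry above.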
Before taking the tests, Charlsie Agro had asked Mark Gerstein, a computational biologist at Yale University, to analyze both her own and her twin sister's raw DNA data; his team stated that the results should be identical, as the sisters' DNA was "statistically the same".[98] The results, while not identical, were very similar, raising questions about how accurate DNA ancestry tests are. Although the machines that process human DNA are highly accurate, errors can occur, and human error can also cause unexpected results.

Questions have been raised since at least 2013 as to whether the company can obtain informed consent through its web-based interactions with people who want to submit samples for sequencing.[99][100] The company collects not only genetic and personal information from customers who order DNA tests, but also data about web behavior captured through its website, products, software, cookies, and smartphone app.[101][102] A combination of individual policies within the terms of service and privacy policy (cookies, disclosure of aggregate data, targeted advertising) makes 23andMe a valuable data mine for third parties such as health insurance companies, pharmaceutical companies, advertising companies, biotechnology companies, law enforcement, and other interested parties.[103][104][105] People may not be aware of how the company uses their data, and there is always a risk of data breaches.[106][107]

United States

Depending on the state in which an individual resides, 23andMe must follow that state's laws on privacy and the disclosure of information. Because 23andMe is not a medical provider, it does not have to abide by the privacy rules that apply at a doctor's office, such as the Health Insurance Portability and Accountability Act (HIPAA).[107] Research by Deloitte found that only 9% of consumers read terms and conditions, and research from ProPrivacy concluded that only 1% of consumers read the policies, suggesting that consent to be included in research may be given without full knowledge of the permissions granted.[108] In addition, 23andMe's privacy policy can be confusing for consumers to understand.[109] Despite this confusion, 23andMe's informed consent practices are IRB-approved. Several sections of the privacy policy allow data to be disclosed to third parties regardless of whether consent has been given.
The Genetic Information Nondiscrimination Act (GINA) protects a person against discrimination based on genetic information by employers or health insurance companies in most situations. However, GINA does not extend to discrimination based on genetic information by long-term-care or disability-insurance providers.

European Union

Effective 25 May 2018, 23andMe must abide by the General Data Protection Regulation (GDPR).[111][112] The GDPR is a set of regulations that gives individuals control over their personal data, whether it is collected, used, and stored digitally or in a structured paper filing system, and restricts companies' use of personal data.[112] The regulation also applies to companies based outside the EU that offer products or services to individuals in the EU.[112]

Medical research

Aggregated customer data is studied by scientific researchers employed by 23andMe for research on inherited disorders; rights to use customers' data are also sold to pharmaceutical and biotechnology companies for use in their research.[4][33][113] The company also collaborates with academic and government scientists.[114][69] In July 2012, 23andMe acquired the startup CureTogether, a crowdsourced treatment-ratings website with data on over 600 medical conditions.[115]

23andMe offers an optional consent that allows an individual's genetic information to be included in medical research that may be published in a scientific journal. However, if an individual chooses not to consent to the use of their "personal information", their "genetic information" and "self-reported information" may still be used and shared with the company's third-party service providers.[101][105]

In 2010, 23andMe said that it had been able to use its database to validate work published by the NIH identifying mutations in the gene that codes for glucocerebrosidase as a risk factor for Parkinson's disease.[114] In 2015, 23andMe made a business decision to pursue drug discovery itself, under the direction of former Genentech executive Richard Scheller.[4][116] One of its main focuses is Parkinson's disease, and it is using the 23andMe database to search for rare variants associated with Parkinson's in the hope of developing a drug for the disease. The company also set up research agreements with the pharmaceutical company Pfizer to explore the genetic causes of inflammatory bowel disease, namely ulcerative colitis and Crohn's disease.[117][118]

In 2016, a project to provide customers with next-generation sequencing was ended because, according to Wojcicki, the results would have been too complicated or vague to fit the company's goal of providing useful information quickly and precisely, directly to consumers.[119] Also in 2016, 23andMe used self-reported data from customers to locate 17 genetic loci that appear to be associated with depression.[120] In 2017, 23andMe, the Lundbeck pharmaceutical company, and the Milken Institute think tank began collaborations focused on psychiatric disorders such as bipolar disorder and major depression.
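The depression result above came from a genome-wide association study (GWAS) run on self-reported phenotypes. The core of any GWAS is an association test repeated for every genotyped variant; the following toy sketch, with invented allele counts and no claim to reflect 23andMe's actual pipeline, shows the idea using a chi-square test:

# Toy per-variant association test of the kind a GWAS repeats for every
# genotyped SNP: compare minor-allele counts between self-reported cases
# and controls. Counts are invented; a real analysis also adjusts for
# confounders and applies a genome-wide significance threshold (~5e-8).
from scipy.stats import chi2_contingency

#          minor allele   major allele
table = [[420, 1580],   # cases    (1,000 people -> 2,000 alleles)
         [310, 1690]]   # controls (1,000 people -> 2,000 alleles)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # a locus counts as a "hit" only at p < 5e-8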
The goals of these collaborations are to determine the genetic roots of such disorders and to pursue drug discovery in those areas.[121]

Use by law enforcement

23andMe does not have a history of allowing its genetic profiles to be used by law enforcement to solve crimes, on the grounds that doing so would violate users' privacy.[122][123] As of February 15, 2019, 23andMe had denied data requests from law enforcement on six separate occasions.[123] However, according to section 8 of the terms of service, "23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary."[124]

The information 23andMe collects from users is highly personal. Overall, the privacy policies on its website are clear, with a table of contents, straightforward structure and language, easy access, and precise explanations of how data is collected, used, and stored; they also explain how users can access, change, and delete their data, and how to contact the company with concerns. Even so, some sections raise questions. For comparison, Ancestry.com uses genetic information not only to provide users with their DNA kit results but also to conduct "scientific, statistical, and historical research" and "to better understand population and ethnicity-related health, wellness, aging, or physical conditions". It asks users for permission before their data is used for research, but many users do not read the privacy policy and do not realize what they are agreeing to. Over 5 million 23andMe customers have opted in to having their data used in research.[125][126] In at least one case, 23andMe was used to identify the remains of a crime victim.