
Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[2] Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data sources. Big data was originally associated with three key concepts: volume, variety, and velocity.[3] The sheer scale of big data poses challenges for sampling, which formerly permitted only partial observation of a phenomenon. A fourth concept, veracity, was later added to refer to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from it.[4]


Non-linear growth of digital global information-storage capacity and the waning of analog storage[1]

Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[5] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[6] Scientists, business executives, medical practitioners, advertisers, and governments alike regularly meet difficulties with large data sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[7] connectomics, complex physics simulations, biology, and environmental research.[8]

The size and number of available data sets have grown rapidly as data is collected by devices such as mobile phones, cheap and numerous information-sensing Internet of things devices, aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks.[9][10] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[11] as of 2012, every day 2.5 exabytes (2.5×2⁶⁰ bytes) of data are generated.[12] An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes in 2013 to 44 zettabytes by 2020, and to 163 zettabytes by 2025.[13] According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021.[14][15] A Statista report forecasts the global big data market to grow to $103 billion by 2027.[16] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year.[17] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[17] Users of services enabled by personal-location data could capture $600 billion in consumer surplus.[17] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[18]

Relational database management systems and desktop statistical software packages used to visualize data often have difficulty processing and analyzing big data. The processing and analysis of big data may require "massively parallel software running on tens, hundreds, or even thousands of servers".[19] What qualifies as "big data" varies depending on the capabilities of those analyzing it and their tools. Furthermore, expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."[20]

Definition

The term big data has been in use since the 1990s, with some giving credit to John Mashey for popularizing it.[21][22] Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time.[23] Big data philosophy encompasses unstructured, semi-structured and structured data; however, the main focus is on unstructured data.[24] Big data "size" is a constantly moving target; as of 2012, it ranged from a few dozen terabytes to many zettabytes of data.[25] Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of a massive scale.[26]

"Variety", "veracity", and various other "Vs" are added by some organizations to describe it, a revision challenged by some industry authorities.[27] The Vs of big data were often referred to as the "three Vs", "four Vs", and "five Vs". They represented the qualities of big data in volume, variety, velocity, veracity, and value.[3] Variability is often included as an additional quality of big data.

A 2018 definition states "Big data is where parallel computing tools are needed to handle data", and notes, "This represents a distinct and clearly defined change in the computer science used, via parallel programming theories, and losses of some of the guarantees and capabilities made by Codd's relational model."[28]

In a comparative study of big datasets, Kitchin and McArdle found that none of the commonly considered characteristics of big data appear consistently across all of the analyzed cases.[29] For this reason, other studies identified the redefinition of power dynamics in knowledge discovery as the defining trait.[30] Instead of focusing on the intrinsic characteristics of big data, this alternative perspective pushes forward a relational understanding of the object, claiming that what matters is the way in which data is collected, stored, made available, and analyzed.

Big data vs. business intelligence

The growing maturity of the concept more starkly delineates the difference between "big data" and "business intelligence":[31]

  • Business intelligence uses applied mathematics tools and descriptive statistics with data with high information density to measure things, detect trends, etc.
  • Big data uses mathematical analysis, optimization, inductive statistics, and concepts from nonlinear system identification[32] to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density[33] to reveal relationships and dependencies, or to perform predictions of outcomes and behaviors.[32][34][promotional source?]


Characteristics

Shows the growth of big data's primary characteristics of volume, velocity, and variety

Big data can be described by the following characteristics:

  • Volume: The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not. The size of big data is usually larger than terabytes and petabytes.[35]
  • Variety: The type and nature of the data. Earlier technologies like RDBMSs could handle structured data efficiently and effectively. However, the change in type and nature from structured to semi-structured or unstructured challenged the existing tools and technologies. Big data technologies evolved with the prime intention of capturing, storing, and processing semi-structured and unstructured (variety) data generated at high speed (velocity) and of huge size (volume). Later, these tools and technologies were also explored and used for handling structured data, though preferably for storage. Eventually, the processing of structured data was still kept as optional, either using big data or traditional RDBMSs. This helps in analyzing data towards effective usage of the hidden insights exposed from data collected via social media, log files, sensors, etc. Big data draws from text, images, audio, and video; it also completes missing pieces through data fusion.
  • Velocity: The speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real time. Compared to small data, big data is produced more continually. Two kinds of velocity related to big data are the frequency of generation and the frequency of handling, recording, and publishing.[36]
  • Veracity: The truthfulness or reliability of the data, which refers to the data quality and the data value.[37] Big data must not only be large in size, but also must be reliable in order to achieve value in its analysis. The data quality of captured data can vary greatly, affecting accurate analysis.[38]
  • Value: The worth in information that can be achieved by the processing and analysis of large datasets. Value also can be measured by an assessment of the other qualities of big data.[39] Value may also represent the profitability of information that is retrieved from the analysis of big data.
  • Variability: The characteristic of the changing formats, structure, or sources of big data. Big data can include structured, unstructured, or combinations of structured and unstructured data. Big data analysis may integrate raw data from multiple sources. The processing of raw data may also involve transformations of unstructured data to structured data.

Other possible characteristics of big data are:[40]

  • Exhaustive: Whether the entire system (i.e., n = all) is captured or recorded. Big data may or may not include all the available data from sources.
  • Fine-grained and uniquely lexical: Respectively, the proportion of specific data collected per element, and whether each element and its characteristics are properly indexed or identified.
  • Relational: Whether the data collected contains common fields that would enable a conjoining, or meta-analysis, of different data sets.
  • Extensional: Whether new fields in each element of the data collected can be added or changed easily.
  • Scalability: Whether the size of the big data storage system can expand rapidly.

Architecture

Big data repositories have existed in many forms, often built by corporations with a special need. Commercial vendors historically offered parallel database management systems for big data beginning in the 1990s. For many years, WinterCorp published the largest database report.[41][promotional source?]

Teradata Corporation marketed the parallel-processing DBC 1012 system in 1984. Teradata systems were the first to store and analyze 1 terabyte of data in 1992. Hard disk drives were 2.5 GB in 1991, so the definition of big data continuously evolves. Teradata installed the first petabyte-class RDBMS-based system in 2007. As of 2017, there are a few dozen petabyte-class Teradata relational databases installed, the largest of which exceeds 50 PB. Systems up until 2008 were 100% structured relational data. Since then, Teradata has added unstructured data types including XML, JSON, and Avro.

In 2000, Seisint Inc. (now LexisNexis Risk Solutions) developed a C++-based distributed platform for data processing and querying known as the HPCC Systems platform. This system automatically partitions, distributes, stores, and delivers structured, semi-structured, and unstructured data across multiple commodity servers. Users can write data processing pipelines and queries in a declarative dataflow programming language called ECL. Data analysts working in ECL are not required to define data schemas upfront and can instead focus on the particular problem at hand, reshaping data in the best possible manner as they develop the solution. In 2004, LexisNexis acquired Seisint Inc.[42] and its high-speed parallel processing platform, and successfully used this platform to integrate the data systems of Choicepoint Inc. when it acquired that company in 2008.[43] In 2011, the HPCC Systems platform was open-sourced under the Apache v2.0 License.

CERN and other physics experiments have collected big data sets for many decades, usually analyzed via high-throughput computing rather than the map-reduce architectures usually meant by the current "big data" movement.

In 2004, Google published a paper on a process called MapReduce that uses a similar architecture. The MapReduce concept provides a parallel processing model, and an associated implementation was released to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the "map" step). The results are then gathered and delivered (the "reduce" step). The framework was very successful,[44] so others wanted to replicate the algorithm. Therefore, an implementation of the MapReduce framework was adopted by an Apache open-source project named "Hadoop".[45] Apache Spark was developed in 2012 in response to limitations in the MapReduce paradigm, as it adds the ability to chain many operations (not just map followed by reduce).
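The map-shuffle-reduce flow described above can be illustrated with a toy word count, the canonical MapReduce example. The Python sketch below is a single-process illustration of the model, not Hadoop's actual API; the function names map_phase, shuffle, and reduce_phase are illustrative only.

    from collections import defaultdict

    def map_phase(document):
        # "Map" step: emit a (key, value) pair for every word.
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        # Group values by key, as the framework does between phases.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # "Reduce" step: aggregate all values gathered for one key.
        return key, sum(values)

    documents = ["big data is big", "data about data"]
    # On a cluster, each document would be mapped on a different node.
    mapped = [pair for doc in documents for pair in map_phase(doc)]
    result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(result)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}

Because the map calls are independent of one another, they can run on thousands of nodes at once, which is the property the framework exploits.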

MIKE2.0 is an open approach to information management that acknowledges the need for revisions due to big data implications identified in an article titled "Big Data Solution Offering".[46] The methodology addresses handling big data in terms of useful permutations of data sources, complexity in interrelationships, and difficulty in deleting (or modifying) individual records.[47]

Studies in 2012 showed that a multiple-layer architecture was one option to address the issues that big data presents. A distributed parallel architecture distributes data across multiple servers; these parallel execution environments can dramatically improve data processing speeds. This type of architecture inserts data into a parallel DBMS, which implements the use of MapReduce and Hadoop frameworks. This type of framework looks to make the processing power transparent to the end-user by using a front-end application server.[48]

The data lake allows an organization to shift its focus from centralized control to a shared model to respond to the changing dynamics of information management. This enables quick segregation of data into the data lake, thereby reducing the overhead time.[49][50]

Technologies

A 2011 McKinsey Global Institute report characterizes the main components and ecosystem of big data as follows:[51]

  • Techniques for analyzing data, such as A/B testing, machine learning, and natural language processing
  • Big data technologies, like business intelligence, cloud computing, and databases
  • Visualization, such as charts, graphs, and other displays of the data

Multidimensional big data can also be represented as OLAP data cubes or, mathematically, tensors. Array database systems have set out to provide storage and high-level query support on this data type. Additional technologies being applied to big data include efficient tensor-based computation,[52] such as multilinear subspace learning,[53] massively parallel-processing (MPP) databases, search-based applications, data mining,[54] distributed file systems, distributed cache (e.g., burst buffer and Memcached), distributed databases, cloud and HPC-based infrastructure (applications, storage and computing resources),[55] and the Internet.[citation needed] Although many approaches and technologies have been developed, it still remains difficult to carry out machine learning with big data.[56]
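As a small illustration of the data-cube idea, a three-dimensional array indexed by (store, product, month) supports OLAP-style "roll-up" aggregations along any axis. The NumPy sketch below uses made-up dimensions and random data purely for illustration.

    import numpy as np

    # Hypothetical sales cube: 4 stores x 3 products x 12 months.
    rng = np.random.default_rng(0)
    sales = rng.integers(0, 100, size=(4, 3, 12))

    # OLAP roll-ups correspond to summing the tensor along axes.
    per_store = sales.sum(axis=(1, 2))   # total sales per store
    per_month = sales.sum(axis=(0, 1))   # seasonality across stores and products
    print(per_store, per_month, sep="\n")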

Some MPP relational databases have the ability to store and manage petabytes of data. Implicit is the ability to load, monitor, back up, and optimize the use of the large data tables in the RDBMS.[57][promotional source?]

DARPA's Topological Data Analysis program seeks the fundamental structure of massive data sets, and in 2008 the technology went public with the launch of a company called "Ayasdi".[58][third-party source needed]

The practitioners of big data analytics processes are generally hostile to slower shared storage,[59] preferring direct-attached storage (DAS) in its various forms from solid state drive (SSD) to high capacity SATA disk buried inside parallel processing nodes. The perception of shared storage architectures—storage area network (SAN) and network-attached storage (NAS)— is that they are relatively slow, complex, and expensive. These qualities are not consistent with big data analytics systems that thrive on system performance, commodity infrastructure, and low cost.

Real or near-real-time information delivery is one of the defining characteristics of big data analytics. Latency is therefore avoided whenever and wherever possible. Data in direct-attached memory or disk is good; data on memory or disk at the other end of an FC SAN connection is not. The cost of a SAN at the scale needed for analytics applications is much higher than that of other storage techniques.


Applications

Bus wrapped with SAP big data parked outside IDF13.

Big data has increased the demand for information management specialists so much so that Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP, and Dell have spent more than $15 billion on software firms specializing in data management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year, about twice as fast as the software business as a whole.[6]

Developed economies increasingly use data-intensive technologies. There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet.[6] Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more people became more literate, which in turn led to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, 65 exabytes in 2007[11] and predictions put the amount of internet traffic at 667 exabytes annually by 2014.[6] According to one estimate, one-third of the globally stored information is in the form of alphanumeric text and still image data,[60] which is the format most useful for most big data applications. This also shows the potential of yet unused data (i.e. in the form of video and audio content).

While many vendors offer off-the-shelf products for big data, experts promote the development of in-house custom-tailored systems if the company has sufficient technical capabilities.[61]



Government

The application of big data in the legal system, together with analysis techniques, is currently considered one of the possible ways to streamline the administration of justice.

The use and adoption of big data within governmental processes allows efficiencies in terms of cost, productivity, and innovation,[62] but does not come without its flaws. Data analysis often requires multiple parts of government (central and local) to work in collaboration and create new and innovative processes to deliver the desired outcome. A common government organization that makes use of big data is the National Security Agency (NSA), which constantly monitors Internet activity in search of potential patterns of suspicious or illegal activity.

Civil registration and vital statistics (CRVS) collects all certificates of status from birth to death. CRVS is a source of big data for governments.

International development

Research on the effective usage of information and communication technologies for development (also known as "ICT4D") suggests that big data technology can make important contributions but also present unique challenges to international development.[63][64] Advancements in big data analysis offer cost-effective opportunities to improve decision-making in critical development areas such as health care, employment, economic productivity, crime, security, and natural disaster and resource management.[65][66][67] Additionally, user-generated data offers new opportunities to give the unheard a voice.[68] However, longstanding challenges for developing regions such as inadequate technological infrastructure and economic and human resource scarcity exacerbate existing concerns with big data such as privacy, imperfect methodology, and interoperability issues.[65] The challenge of "big data for development"[65] is currently evolving toward the application of this data through machine learning, known as "artificial intelligence for development" (AI4D).[69]


A major practical application of big data for development has been "fighting poverty with data".[70] In 2015, Blumenstock and colleagues predicted poverty and wealth from mobile phone metadata,[71] and in 2016 Jean and colleagues combined satellite imagery and machine learning to predict poverty.[72] Using digital trace data to study the labor market and the digital economy in Latin America, Hilbert and colleagues[73][74] argue that digital trace data has several benefits, such as:

  • Thematic coverage: including areas that were previously difficult or impossible to measure
  • Geographical coverage: international sources can provide sizable and comparable data for almost all countries, including many small countries that usually are not included in international inventories
  • Level of detail: providing fine-grained data with many interrelated variables, and new aspects, like network connections
  • Timeliness and time series: graphs can be produced within days of the data being collected


At the same time, working with digital trace data instead of traditional survey data does not eliminate the traditional challenges involved when working in the field of international quantitative analysis. Priorities change, but the basic discussions remain the same. Among the main challenges are:

  • Representativeness. While traditional development statistics is mainly concerned with the representativeness of random survey samples, digital trace data is never a random sample.[75]
  • Generalizability. While observational data always represents its source very well, it represents only what it represents, and nothing more. While it is tempting to generalize from specific observations of one platform to broader settings, this is often very deceptive.
  • Harmonization. Digital trace data still requires international harmonization of indicators. It adds the challenge of so-called "data-fusion", the harmonization of different sources.
  • Data overload. Analysts and institutions are not used to dealing effectively with a large number of variables, which can be done efficiently with interactive dashboards. Practitioners still lack a standard workflow that would allow researchers, users, and policymakers to do so efficiently and effectively.[73]


Healthcare

Big data analytics has been used in healthcare to provide personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms, and patient registries.[76][77][78][79] Some areas of improvement are more aspirational than actually implemented. The level of data generated within healthcare systems is not trivial. With the added adoption of mHealth, eHealth, and wearable technologies, the volume of data will continue to increase. This includes electronic health record data, imaging data, patient-generated data, sensor data, and other forms of data that are difficult to process. There is now an even greater need for such environments to pay greater attention to data and information quality.[80] "Big data very often means 'dirty data' and the fraction of data inaccuracies increases with data volume growth." Human inspection at the big data scale is impossible, and there is a desperate need in the health service for intelligent tools for accuracy and believability control and for handling missed information.[81] While extensive information in healthcare is now electronic, it fits under the big data umbrella as most of it is unstructured and difficult to use.[82] The use of big data in healthcare has raised significant ethical challenges, ranging from risks for individual rights, privacy, and autonomy to transparency and trust.[83]

Big data in health research is particularly promising in terms of exploratory biomedical research, as data-driven analysis can move forward more quickly than hypothesis-driven research.[84] Trends seen in data analysis can then be tested in traditional, hypothesis-driven follow-up biological research and eventually clinical research.

A related application sub-area that heavily relies on big data within the healthcare field is that of computer-aided diagnosis in medicine.[85] For instance, for epilepsy monitoring it is customary to create 5 to 10 GB of data daily.[86] Similarly, a single uncompressed image of breast tomosynthesis averages 450 MB of data.[87] These are just a few of the many examples where computer-aided diagnosis uses big data. For this reason, big data has been recognized as one of the seven key challenges that computer-aided diagnosis systems need to overcome in order to reach the next level of performance.[88]


Education

A McKinsey Global Institute study found a shortage of 1.5 million highly trained data professionals and managers,[51] and a number of universities,[89][better source needed] including the University of Tennessee and UC Berkeley, have created master's programs to meet this demand. Private boot camps have also developed programs to meet that demand, including free programs like The Data Incubator or paid programs like General Assembly.[90] In the specific field of marketing, one of the problems stressed by Wedel and Kannan[91] is that marketing has several sub-domains (e.g., advertising, promotions, product development, branding) that all use different types of data.


Media

To understand how the media uses big data, it is first necessary to provide some context into the mechanisms used in the media process. Nick Couldry and Joseph Turow have suggested that practitioners in media and advertising approach big data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers, based on information gleaned exclusively through various data-mining activities.[92]

  • Targeting of consumers (for advertising by marketers)[93]
  • Data capture
  • Data journalism: publishers and journalists use big data tools to provide unique and innovative insights and infographics.

Channel 4, the British public-service television broadcaster, is a leader in the field of big data and data analysis.[94]


Insurance

Health insurance providers are collecting data on social "determinants of health" such as food and TV consumption, marital status, clothing size, and purchasing habits, from which they make predictions on health costs, in order to spot health issues in their clients. It is controversial whether these predictions are currently being used for pricing.[95]

Internet of things (IoT)

Big data and the IoT work in conjunction. Data extracted from IoT devices provides a mapping of device inter-connectivity. Such mappings have been used by the media industry, companies, and governments to more accurately target their audience and increase media efficiency. The IoT is also increasingly adopted as a means of gathering sensory data, and this sensory data has been used in medical,[96] manufacturing[97] and transportation[98] contexts.

Kevin Ashton, the digital innovation expert who is credited with coining the term,[99] defines the Internet of things in this quote: "If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling, and whether they were fresh or past their best."

Information technology

Especially since 2015, big data has come to prominence within business operations as a tool to help employees work more efficiently and streamline the collection and distribution of information technology (IT). The use of big data to resolve IT and data collection issues within an enterprise is called IT operations analytics (ITOA).[100] By applying big data principles to the concepts of machine intelligence and deep computing, IT departments can predict potential issues and prevent them.[100] ITOA businesses offer platforms for systems management that bring data silos together and generate insights from the whole of the system rather than from isolated pockets of data.

  • The Integrated Joint Operations Platform (IJOP, 一体化联合作战平台) is used by the Chinese government to monitor the population, particularly Uyghurs.[101] Biometrics, including DNA samples, are gathered through a program of free physicals.[102]
  • By 2020, China plans to give all its citizens a personal "social credit" score based on how they behave.[103] The Social Credit System, now being piloted in a number of Chinese cities, is considered a form of mass surveillance which uses big data analysis technology.[104][105]
  • Big data analysis was tried out by the BJP in its campaign to win the 2014 Indian general election.[106]
  • The Indian government uses numerous techniques to ascertain how the Indian electorate is responding to government action, as well as ideas for policy augmentation.
  • Personalized diabetic treatments can be created through GlucoMe's big data solution.[107]


United Kingdom

Examples of uses of big data in public services:

  • Data on prescription drugs: by connecting origin, location and the time of each prescription, a research unit was able to exemplify and examine the considerable delay between the release of any given drug, and a UK-wide adaptation of the National Institute for Health and Care Excellence guidelines. This suggests that new or most up-to-date drugs take some time to filter through to the general patient.[citation needed][108]
  • Joining up data: a local authority blended data about services, such as road gritting rotas, with services for people at risk, such as Meals on Wheels. The connection of data allowed the local authority to avoid any weather-related delay.[109]

United States

  • In 2012, the Obama administration announced the Big Data Research and Development Initiative, to explore how big data could be used to address important problems faced by the government.[110] The initiative is composed of 84 different big data programs spread across six departments.[111]
  • Big data analysis played a large role in Barack Obama's successful 2012 re-election campaign.[112]
  • The United States Federal Government owns five of the ten most powerful supercomputers in the world.[113][114]
  • The Utah Data Center has been constructed by the United States National Security Agency. When finished, the facility will be able to handle a large amount of information collected by the NSA over the Internet. The exact amount of storage space is unknown, but more recent sources claim it will be on the order of a few exabytes.[115][116][117] This has posed security concerns regarding the anonymity of the data collected.[118]


Retail

  • Walmart handles more than 1 million customer transactions every hour, which are imported into databases estimated to contain more than 2.5 petabytes (2560 terabytes) of data—the equivalent of 167 times the information contained in all the books in the US Library of Congress.[6]
  • Windermere Real Estate uses location information from nearly 100 million drivers to help new home buyers determine their typical drive times to and from work throughout various times of the day.[119]
  • FICO Card Detection System protects accounts worldwide.[120]


Science

  • The Large Hadron Collider experiments represent about 150 million sensors delivering data 40 million times per second. There are nearly 600 million collisions per second. After filtering out, and refraining from recording, more than 99.99995%[121] of these streams, there are 1,000 collisions of interest per second.[122][123][124]
    • As a result, working with less than 0.001% of the sensor stream data, the data flow from all four LHC experiments represents an annual rate of 25 petabytes before replication (as of 2012). This becomes nearly 200 petabytes after replication.
    • If all sensor data were recorded in the LHC, the data flow would be extremely hard to work with, exceeding an annual rate of 150 million petabytes, or nearly 500 exabytes per day, before replication. To put the number in perspective, this is equivalent to 500 quintillion (5×10²⁰) bytes per day, almost 200 times more than all the other sources combined in the world.
  • The Square Kilometre Array is a radio telescope built of thousands of antennas. It is expected to be operational by 2024. Collectively, these antennas are expected to gather 14 exabytes and store one petabyte per day.[125][126] It is considered one of the most ambitious scientific projects ever undertaken.[127]
  • When the Sloan Digital Sky Survey (SDSS) began to collect astronomical data in 2000, it amassed more in its first few weeks than all data collected in the history of astronomy previously. Continuing at a rate of about 200 GB per night, SDSS has amassed more than 140 terabytes of information.[6] When the Large Synoptic Survey Telescope, successor to SDSS, comes online in 2020, its designers expect it to acquire that amount of data every five days.[6]
  • Decoding the human genome originally took 10 years; now it can be achieved in less than a day. DNA sequencers have divided the sequencing cost by 10,000 in the last ten years, which is 100 times cheaper than the reduction in cost predicted by Moore's law.[128]
  • The NASA Center for Climate Simulation (NCCS) stores 32 petabytes of climate observations and simulations on the Discover supercomputing cluster.[129][130]
  • Google's DNAStack compiles and organizes DNA samples of genetic data from around the world to identify diseases and other medical defects. These fast and exact calculations eliminate any "friction points", or human errors that could be made by one of the numerous science and biology experts working with the DNA. DNAStack, a part of Google Genomics, allows scientists to use the vast sample of resources from Google's search server to scale social experiments that would usually take years, instantly.[131][132]
  • 23andMe's DNA database contains the genetic information of over 1,000,000 people worldwide.[133] The company explores selling the "anonymous aggregated genetic data" to other researchers and pharmaceutical companies for research purposes if patients give their consent.[134][135][136][137][138] Ahmad Hariri, professor of psychology and neuroscience at Duke University who has been using 23andMe in his research since 2009, states that the most important aspect of the company's new service is that it makes genetic research accessible and relatively cheap for scientists.[134] A study that identified 15 genome sites linked to depression in 23andMe's database led to a surge in demands to access the repository, with 23andMe fielding nearly 20 requests to access the depression data in the two weeks after publication of the paper.[139]
  • Computational fluid dynamics (CFD) and hydrodynamic turbulence research generate massive data sets. The Johns Hopkins Turbulence Databases (JHTDB) contain over 350 terabytes of spatiotemporal fields from direct numerical simulations of various turbulent flows. Such data have been difficult to share using traditional methods such as downloading flat simulation output files. The data within JHTDB can be accessed using "virtual sensors" with various access modes, ranging from direct web-browser queries and access through Matlab, Python, Fortran and C programs executing on clients' platforms, to cut-out services to download raw data. The data have been used in over 150 scientific publications.


Sports

Big data can be used to improve training and to understand competitors, using sport sensors. It is also possible to predict winners in a match using big data analytics.[140] Future performance of players could be predicted as well. Thus, players' value and salary are determined by data collected throughout the season.[141]

In Formula One races, race cars with hundreds of sensors generate terabytes of data. These sensors collect data points from tire pressure to fuel burn efficiency.[142] Based on the data, engineers and data analysts decide whether adjustments should be made in order to win a race. In addition, race teams try to predict in advance the time they will finish the race, based on simulations using data collected over the season.[143]


Technology

  • eBay.com uses two data warehouses at 7.5 petabytes and 40 PB, as well as a 40 PB Hadoop cluster for search, consumer recommendations, and merchandising.[144]
  • Amazon.com handles millions of back-end operations every day, as well as queries from more than half a million third-party sellers. The core technology that keeps Amazon running is Linux-based, and as of 2005 they had the world's three largest Linux databases, with capacities of 7.8 TB, 18.5 TB, and 24.7 TB.[145]
  • Facebook handles 50 billion photos from its user base.[146] As of June 2017, Facebook reached 2 billion monthly active users.[147]
  • Google was handling roughly 100 billion searches per month as of August 2012.[148]


COVID-19

During the COVID-19 pandemic, big data was raised as a way to minimise the impact of the disease. Significant applications of big data included minimising the spread of the virus, case identification, and development of medical treatment.[149]

Governments used big data to track infected people to minimise spread. Early adopters included China, Taiwan, South Korea, and Israel.[150][151][152]

Research activities

Encrypted search and cluster formation in big data were demonstrated in March 2014 at the American Society of Engineering Education. Gautam Siwach of the MIT Computer Science and Artificial Intelligence Laboratory and Amir Esmailpour of the UNH Research Group investigated the key features of big data, namely the formation of clusters and their interconnections. They focused on the security of big data and the orientation of the term towards the presence of different types of data in encrypted form at the cloud interface, providing raw definitions and real-time examples within the technology. Moreover, they proposed an approach for identifying the encoding technique to advance towards an expedited search over encrypted text, leading to security enhancements in big data.[153]

In March 2012, The White House announced a national "Big Data Initiative" that consisted of six federal departments and agencies committing more than $200 million to big data research projects.[154]

The initiative included a National Science Foundation "Expeditions in Computing" grant of $10 million over five years to the AMPLab[155] at the University of California, Berkeley.[156] The AMPLab also received funds from DARPA and over a dozen industrial sponsors, and uses big data to attack a wide range of problems from predicting traffic congestion[157] to fighting cancer.[158]

The White House Big Data Initiative also included a commitment by the Department of Energy to provide $25 million in funding over five years to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute,[159] led by the Energy Department's Lawrence Berkeley National Laboratory. The SDAV Institute aims to bring together the expertise of six national laboratories and seven universities to develop new tools to help scientists manage and visualize data on the department's supercomputers.

The U.S. state of Massachusetts announced the Massachusetts Big Data Initiative in May 2012, which provides funding from the state government and private companies to a variety of research institutions.[160] The Massachusetts Institute of Technology hosts the Intel Science and Technology Center for Big Data in the MIT Computer Science and Artificial Intelligence Laboratory, combining government, corporate, and institutional funding and research efforts.[161]

The European Commission is funding the two-year-long Big Data Public Private Forum through their Seventh Framework Program to engage companies, academics and other stakeholders in discussing big data issues. The project aims to define a strategy in terms of research and innovation to guide supporting actions from the European Commission in the successful implementation of the big data economy. Outcomes of this project will be used as input for Horizon 2020, their next framework program.[162]

The British government announced in March 2014 the founding of the Alan Turing Institute, named after the computer pioneer and code-breaker, which will focus on new ways to collect and analyze large data sets.[163]

At the University of Waterloo Stratford Campus Canadian Open Data Experience (CODE) Inspiration Day, participants demonstrated how using data visualization can increase the understanding and appeal of big data sets and communicate their story to the world.[164]

Computational social sciences – Anyone can use application programming interfaces (APIs) provided by big data holders, such as Google and Twitter, to do research in the social and behavioral sciences.[165] Often these APIs are provided for free.[165] Tobias Preis et al. used Google Trends data to demonstrate that Internet users from countries with a higher per capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings suggest there may be a link between online behaviors and real-world economic indicators.[166][167][168] The authors of the study examined Google query logs, computing the ratio of the volume of searches for the coming year ("2011") to the volume of searches for the previous year ("2009"), which they call the "future orientation index".[169] They compared the future orientation index to the per capita GDP of each country, and found a strong tendency for countries where Google users inquire more about the future to have a higher GDP.
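The index itself is simple arithmetic; the sketch below computes it for two countries from hypothetical search-volume totals (the numbers and country labels are made up for illustration, not taken from the study).

    def future_orientation_index(volume_next_year, volume_prev_year):
        # Ratio of searches for the coming year ("2011") to searches
        # for the previous year ("2009"), made during 2010.
        return volume_next_year / volume_prev_year

    # Hypothetical Google Trends search volumes recorded during 2010.
    volumes = {"Country A": (1300, 1000), "Country B": (600, 1100)}
    for country, (next_yr, prev_yr) in volumes.items():
        print(country, round(future_orientation_index(next_yr, prev_yr), 2))
    # Country A 1.3   (more forward-looking)
    # Country B 0.55  (more backward-looking)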

Tobias Preis and his colleagues Helen Susannah Moat and H. Eugene Stanley introduced a method to identify online precursors for stock market moves, using trading strategies based on search volume data provided by Google Trends.[170] Their analysis of Google search volume for 98 terms of varying financial relevance, published in Scientific Reports,[171] suggests that increases in search volume for financially relevant search terms tend to precede large losses in financial markets.[172][173][174][175][176][177][178]
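The published strategy compares each week's search volume with a moving average of the preceding weeks and trades on the change in attention. The Python sketch below reproduces only that signal-generation idea, with a hypothetical weekly series and an arbitrary window; it is not the authors' exact parameterization.

    def trend_signals(volumes, window=3):
        # Rising search interest relative to the recent moving average is
        # treated as a "sell" signal, falling interest as a "buy" signal
        # (positions are closed one week later in the original study).
        signals = []
        for t in range(window, len(volumes)):
            moving_avg = sum(volumes[t - window:t]) / window
            signals.append("sell" if volumes[t] > moving_avg else "buy")
        return signals

    # Hypothetical weekly Google Trends volumes for a finance-related term.
    weekly = [52, 55, 53, 60, 58, 49, 47, 65]
    print(trend_signals(weekly))  # ['sell', 'sell', 'buy', 'buy', 'sell']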

Big data sets come with algorithmic challenges that previously did not exist. Hence, some see a need to fundamentally change the ways data is processed.[179]

The Workshops on Algorithms for Modern Massive Data Sets (MMDS) bring together computer scientists, statisticians, mathematicians, and data analysis practitioners to discuss algorithmic challenges of big data.[180] Regarding big data, such concepts of magnitude are relative. As it is stated "If the past is of any guidance, then today's big data most likely will not be considered as such in the near future."[85]

Sampling big data

A research question asked about big data sets is whether it is necessary to look at the full data to draw certain conclusions about the properties of the data, or whether a sample is good enough. The name big data itself contains a term related to size, and this is an important characteristic of big data. But sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict downtime it may not be necessary to look at all the data; a sample may be sufficient. Big data can be broken down by various data point categories such as demographic, psychographic, behavioral, and transactional data. With large sets of data points, marketers are able to create and use more customized segments of consumers for more strategic targeting.

There has been some work done in sampling algorithms for big data. A theoretical formulation for sampling Twitter data has been developed.[181]
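A standard technique for drawing a fixed-size uniform sample from a stream too large to hold in memory is reservoir sampling. The sketch below is a generic illustration of that idea (Vitter's algorithm R), not the Twitter-specific formulation cited above.

    import random

    def reservoir_sample(stream, k):
        # Keep a uniform random sample of k items from a stream of
        # unknown length, using O(k) memory.
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                # Replace a kept element with probability k/(i+1), which
                # leaves every item seen so far equally likely to remain.
                j = random.randrange(i + 1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    # E.g., sample 5 sensor readings out of a million-element stream.
    print(reservoir_sample(range(1_000_000), 5))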

Critique

Critiques of the big data paradigm come in two flavors: those that question the implications of the approach itself, and those that question the way it is currently done.[182] One approach to this criticism is the field of critical data studies.

Critiques of the big data paradigm

"A crucial problem is that we do not know much about the underlying empirical micro-processes that lead to the emergence of the[se] typical network characteristics of Big Data."[23] In their critique, Snijders, Matzat, and Reips point out that often very strong assumptions are made about mathematical properties that may not at all reflect what is really going on at the level of micro-processes. Mark Graham has leveled broad critiques at Chris Anderson's assertion that big data will spell the end of theory:[183] focusing in particular on the notion that big data must always be contextualized in their social, economic, and political contexts.[184] Even as companies invest eight- and nine-figure sums to derive insight from information streaming in from suppliers and customers, less than 40% of employees have sufficiently mature processes and skills to do so. To overcome this insight deficit, big data, no matter how comprehensive or well analyzed, must be complemented by "big judgment", according to an article in the Harvard Business Review.[185]

Much in the same line, it has been pointed out that the decisions based on the analysis of big data are inevitably "informed by the world as it was in the past, or, at best, as it currently is".[65] Fed by a large number of data on past experiences, algorithms can predict future development if the future is similar to the past.[186] If the system's dynamics change in the future (if it is not a stationary process), the past can say little about the future. In order to make predictions in changing environments, it would be necessary to have a thorough understanding of the system's dynamics, which requires theory.[186] As a response to this critique, Alemany Oliver and Vayre suggest using "abductive reasoning as a first step in the research process in order to bring context to consumers' digital traces and make new theories emerge".[187] Additionally, it has been suggested to combine big data approaches with computer simulations, such as agent-based models[65] and complex systems. Agent-based models are increasingly getting better at predicting the outcome of social complexities of even unknown future scenarios through computer simulations that are based on a collection of mutually interdependent algorithms.[188][189] Finally, the use of multivariate methods that probe for the latent structure of the data, such as factor analysis and cluster analysis, has proven useful as an analytic approach that goes well beyond the bi-variate approaches (e.g., contingency tables) typically employed with smaller data sets.

In health and biology, conventional scientific approaches are based on experimentation. For these approaches, the limiting factor is the relevant data that can confirm or refute the initial hypothesis.[190] A new postulate is accepted now in biosciences: the information provided by the data in huge volumes (omics) without prior hypothesis is complementary and sometimes necessary to conventional approaches based on experimentation.[191][192] In the massive approaches it is the formulation of a relevant hypothesis to explain the data that is the limiting factor.[193] The search logic is reversed and the limits of induction ("Glory of Science and Philosophy scandal", C. D. Broad, 1926) are to be considered.[citation needed]

Privacy advocates are concerned about the threat to privacy represented by increasing storage and integration of personally identifiable information; expert panels have released various policy recommendations to conform practice to expectations of privacy.[194] The misuse of big data in several cases by media, companies, and even the government has undermined trust in almost every fundamental institution holding up society.[195]

Nayef Al-Rodhan argues that a new kind of social contract will be needed to protect individual liberties in the context of big data and giant corporations that own vast amounts of information, and that the use of big data should be monitored and better regulated at the national and international levels.[196] Barocas and Nissenbaum argue that one way of protecting individual users is by being informed about the types of information being collected, with whom it is shared, under what constraints and for what purposes.[197]

Critiques of the "V" model

The "V" model of big data is concerning as it centers around computational scalability and lacks in a loss around the perceptibility and understandability of information. This led to the framework of cognitive big data, which characterizes big data applications according to:[198]

  • Data completeness: understanding of the non-obvious from data
  • Data correlation, causation, and predictability: causality as not an essential requirement to achieve predictability
  • Explainability and interpretability: humans desire to understand and accept what they understand, whereas algorithms do not cope with this
  • Level of automated decision-making: algorithms that support automated decision-making and algorithmic self-learning

Critiques of novelty

Large data sets have been analyzed by computing machines for well over a century, including the US census analytics performed by IBM's punch-card machines which computed statistics including means and variances of populations across the whole continent. In more recent decades, science experiments such as CERN have produced data on similar scales to current commercial "big data". However, science experiments have tended to analyze their data using specialized custom-built high-performance computing (super-computing) clusters and grids, rather than clouds of cheap commodity computers as in the current commercial wave, implying a difference in both culture and technology stack.

Critiques of big data execution

Ulf-Dietrich Reips and Uwe Matzat wrote in 2014 that big data had become a "fad" in scientific research.[165] Researcher danah boyd has raised concerns about the use of big data in science neglecting principles such as choosing a representative sample by being too concerned about handling the huge amounts of data.[199] This approach may lead to results that are biased in one way or another.[200] Integration across heterogeneous data resources—some that might be considered big data and others not—presents formidable logistical as well as analytical challenges, but many researchers argue that such integrations are likely to represent the most promising new frontiers in science.[201] In the provocative article "Critical Questions for Big Data",[202] the authors title big data a part of mythology: "large data sets offer a higher form of intelligence and knowledge [...], with the aura of truth, objectivity, and accuracy". Users of big data are often "lost in the sheer volume of numbers", and "working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth".[202] Recent developments in the BI domain, such as pro-active reporting, especially target improvements in the usability of big data through automated filtering of non-useful data and correlations.[203] Big structures are full of spurious correlations,[204] whether because of non-causal coincidences (the law of truly large numbers), the nature of big randomness[205] (Ramsey theory), or the existence of non-included factors, so the hope of early experimenters to make large databases of numbers "speak for themselves" and revolutionize the scientific method is questioned.[206] Catherine Tucker has pointed to "hype" around big data, writing "By itself, big data is unlikely to be valuable." The article explains: "The many contexts where data is cheap relative to the cost of retaining talent to process it, suggests that processing skills are more important than data itself in creating value for a firm."[207]

Big data analysis is often shallow compared to analysis of smaller data sets.[208] In many big data projects, there is no large data analysis happening, but the challenge is the extract, transform, load part of data pre-processing.[208]

Big data is a buzzword and a "vague term",[209][210] but at the same time an "obsession"[210] among entrepreneurs, consultants, scientists, and the media. Big data showcases such as Google Flu Trends failed to deliver good predictions in recent years, overstating flu outbreaks by a factor of two. Similarly, Academy Awards and election predictions based solely on Twitter were more often off than on target. Big data often poses the same challenges as small data; adding more data does not solve problems of bias, but may emphasize other problems. In particular, data sources such as Twitter are not representative of the overall population, and results drawn from such sources may then lead to wrong conclusions. Google Translate—which is based on big data statistical analysis of text—does a good job at translating web pages. However, results from specialized domains may be dramatically skewed. On the other hand, big data may also introduce new problems, such as the multiple comparisons problem: simultaneously testing a large set of hypotheses is likely to produce many false results that mistakenly appear significant. Ioannidis argued that "most published research findings are false"[211] due to essentially the same effect: when many scientific teams and researchers each perform many experiments (i.e., process a big amount of scientific data, although not with big data technology), the likelihood of a "significant" result being false grows fast – even more so when only positive results are published. Furthermore, big data analytics results are only as good as the model on which they are predicated. In one example, big data took part in attempting to predict the results of the 2016 U.S. presidential election[212] with varying degrees of success.
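The multiple comparisons effect is easy to reproduce: test enough hypotheses on pure noise and a predictable fraction will appear "significant". A minimal simulation in Python (using NumPy and SciPy, with arbitrarily chosen sizes) follows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_tests, n_obs, alpha = 1000, 50, 0.05

    false_positives = 0
    for _ in range(n_tests):
        # Pure noise: the null hypothesis (mean = 0) is true by construction.
        sample = rng.normal(loc=0.0, scale=1.0, size=n_obs)
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value < alpha:
            false_positives += 1

    # Expect roughly alpha * n_tests = 50 spurious "significant" results.
    print(f"{false_positives} of {n_tests} null tests significant at p < {alpha}")

With no real effect present, about 5% of the tests still cross the conventional significance threshold, which is exactly the trap of mining a large hypothesis space without correction.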

Critiques of big data policing and surveillance

Big data has been used in policing and surveillance by institutions like law enforcement and corporations.[213] Due to the less visible nature of data-based surveillance as compared to traditional methods of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing,[214] big data policing can reproduce existing societal inequalities in three ways:

  • Placing people under increased surveillance by using the justification of a mathematical and therefore unbiased algorithm
  • Increasing the scope and number of people that are subject to law enforcement tracking and exacerbating existing racial overrepresentation in the criminal justice system
  • Encouraging members of society to abandon interactions with institutions that would create a digital trace, thus creating obstacles to social inclusion

If these potential problems are not corrected or regulated, the effects of big data policing may continue to shape societal hierarchies. Conscientious usage of big data policing could prevent individual level biases from becoming institutional biases, Brayne also notes.

In popular culture

  • Moneyball is a non-fiction book that explores how the Oakland Athletics used statistical analysis to outperform teams with larger budgets. In 2011 a film adaptation starring Brad Pitt was released.
  • In Captain America: The Winter Soldier, HYDRA (disguised as S.H.I.E.L.D.) develops helicarriers that use data to identify and eliminate threats across the globe.
  • In The Dark Knight, Batman uses a sonar device that can spy on all of Gotham City. The data is gathered from the mobile phones of people within the city.

See also

  • Big data ethics
  • Big Data Maturity Model
  • Big memory
  • Data curation
  • Data defined storage
  • Data engineering
  • Data lineage
  • Data philanthropy
  • Data science
  • Datafication
  • Document-oriented database
  • In-memory processing
  • List of big data companies
  • Very large database
  • XLDB

References

  1. ^ Hilbert, Martin; López, Priscila (2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science. 332 (6025): 60–65. Bibcode:2011Sci...332...60H. doi:10.1126/science.1200970. PMID 21310967. S2CID 206531385. Archived from the original on 14 April 2016. Retrieved 13 April 2016.
  2. ^ Breur, Tom (July 2016). "Statistical Power Analysis and the contemporary "crisis" in social sciences". Journal of Marketing Analytics. London, England: Palgrave Macmillan. 4 (2–3): 61–65. doi:10.1057/s41270-016-0001-3. ISSN 2050-3318.
  3. ^ a b "The 5 V's of big data". Watson Health Perspectives. 17 September 2016. Archived from the original on 18 January 2021. Retrieved 20 January 2021.
  4. ^ Cappa, Francesco; Oriani, Raffaele; Peruffo, Enzo; McCarthy, Ian (2021). "Big Data for Creating and Capturing Value in the Digitalized Environment: Unpacking the Effects of Volume, Variety, and Veracity on Firm Performance*". Journal of Product Innovation Management. 38 (1): 49–67. doi:10.1111/jpim.12545. ISSN 0737-6782. S2CID 225209179.
  5. ^ boyd, dana; Crawford, Kate (21 September 2011). "Six Provocations for Big Data". Social Science Research Network: A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society. doi:10.2139/ssrn.1926431. S2CID 148610111. Archived from the original on 28 February 2020. Retrieved 12 July 2019.
  6. ^ a b c d e f g "Data, data everywhere". The Economist. 25 February 2010. Archived from the original on 27 May 2018. Retrieved 9 December 2012.
  7. ^ "Community cleverness required". Nature. 455 (7209): 1. September 2008. Bibcode:2008Natur.455....1.. doi:10.1038/455001a. PMID 18769385.
  8. ^ Reichman OJ, Jones MB, Schildhauer MP (February 2011). "Challenges and opportunities of open data in ecology". Science. 331 (6018): 703–5. Bibcode:2011Sci...331..703R. doi:10.1126/science.1197962. PMID 21311007. S2CID 22686503. Archived from the original on 19 October 2020. Retrieved 12 July 2019.
  9. ^ Hellerstein, Joe (9 November 2008). "Parallel Programming in the Age of Big Data". Gigaom Blog. Archived from the original on 7 October 2012. Retrieved 21 April 2010.
  10. ^ Segaran, Toby; Hammerbacher, Jeff (2009). Beautiful Data: The Stories Behind Elegant Data Solutions. O'Reilly Media. p. 257. ISBN 978-0-596-15711-1. Archived from the original on 12 May 2016. Retrieved 31 December 2015.
  11. ^ a b Hilbert M, López P (April 2011). "The world's technological capacity to store, communicate, and compute information" (PDF). Science. 332 (6025): 60–5. Bibcode:2011Sci...332...60H. doi:10.1126/science.1200970. PMID 21310967. S2CID 206531385. Archived (PDF) from the original on 19 August 2019. Retrieved 11 May 2019.
  12. ^ "IBM What is big data? – Bringing big data to the enterprise". ibm.com. Archived from the original on 24 August 2013. Retrieved 26 August 2013.
  13. ^ Reinsel, David; Gantz, John; Rydning, John (13 April 2017). "Data Age 2025: The Evolution of Data to Life-Critical" (PDF). seagate.com. Framingham, MA, US: International Data Corporation. Archived (PDF) from the original on 8 December 2017. Retrieved 2 November 2017.
  14. ^ "Global Spending on Big Data and Analytics Solutions Will Reach $215.7 Billion in 2021, According to a New IDC Spending Guide".
  15. ^ "Big data and business analytics revenue 2022".
  16. ^ "Global big data industry market size 2011-2027".
  17. ^ a b c https://www.mckinsey.com/~/media/mckinsey/business%20functions/mckinsey%20digital/our%20insights/big%20data%20the%20next%20frontier%20for%20innovation/mgi_big_data_exec_summary.pdf[bare URL PDF]
  18. ^ Oracle and FSN, "Mastering Big Data: CFO Strategies to Transform Insight into Opportunity" Archived 4 August 2013 at the Wayback Machine, December 2012
  19. ^ Jacobs, A. (6 July 2009). "The Pathologies of Big Data". ACMQueue. Archived from the original on 8 December 2015. Retrieved 21 April 2010.
  20. ^ Magoulas, Roger; Lorica, Ben (February 2009). "Introduction to Big Data". Release 2.0. Sebastopol CA: O'Reilly Media (11). Archived from the original on 2 November 2021. Retrieved 26 February 2021.
  21. ^ John R. Mashey (25 April 1998). "Big Data ... and the Next Wave of InfraStress" (PDF). Slides from invited talk. Usenix. Archived (PDF) from the original on 12 October 2016. Retrieved 28 September 2016.
  22. ^ Steve Lohr (1 February 2013). "The Origins of 'Big Data': An Etymological Detective Story". The New York Times. Archived from the original on 6 March 2016. Retrieved 28 September 2016.
  23. ^ a b Snijders, C.; Matzat, U.; Reips, U.-D. (2012). "'Big Data': Big gaps of knowledge in the field of Internet". International Journal of Internet Science. 7: 1–5. Archived from the original on 23 November 2019. Retrieved 13 April 2013.
  24. ^ Dedić, N.; Stanier, C. (2017). "Towards Differentiating Business Intelligence, Big Data, Data Analytics and Knowledge Discovery". Innovations in Enterprise Information Systems Management and Engineering. Lecture Notes in Business Information Processing. Vol. 285. Berlin; Heidelberg: Springer International Publishing. pp. 114–122. doi:10.1007/978-3-319-58801-8_10. ISBN 978-3-319-58800-1. ISSN 1865-1356. OCLC 909580101. Archived from the original on 27 November 2020. Retrieved 7 September 2019.
  25. ^ Everts, Sarah (2016). "Information Overload". Distillations. Vol. 2, no. 2. pp. 26–33. Archived from the original on 3 April 2019. Retrieved 22 March 2018.
  26. ^ Hashem, Ibrahim Abaker Targio; Yaqoob, Ibrar; Anuar, Nor Badrul; Mokhtar, Salimah; Gani, Abdullah; Khan, Samee Ullah (2015). "The rise of 'big data' on cloud computing: Review and open research issues". Information Systems. 47: 98–115. doi:10.1016/j.is.2014.07.006.
  27. ^ Grimes, Seth. "Big Data: Avoid 'Wanna V' Confusion". InformationWeek. Archived from the original on 23 December 2015. Retrieved 5 January 2016.
  28. ^ Fox, Charles (25 March 2018). Data Science for Transport. Springer Textbooks in Earth Sciences, Geography and Environment. Springer. ISBN 9783319729527. Archived from the original on 1 April 2018. Retrieved 31 March 2018.
  29. ^ Kitchin, Rob; McArdle, Gavin (2016). "What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets". Big Data & Society. 3: 1–10. doi:10.1177/2053951716631130. S2CID 55539845.
  30. ^ Balazka, Dominik; Rodighiero, Dario (2020). "Big Data and the Little Big Bang: An Epistemological (R)evolution". Frontiers in Big Data. 3: 31. doi:10.3389/fdata.2020.00031. hdl:1721.1/128865. PMC 7931920. PMID 33693404.
  31. ^ "avec focalisation sur Big Data & Analytique" (PDF). Bigdataparis.com. Archived from the original (PDF) on 25 February 2021. Retrieved 8 October 2017.
  32. ^ a b Billings S.A. "Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains". Wiley, 2013
  33. ^ "le Blog ANDSI » DSI Big Data". Andsi.fr. Archived from the original on 10 October 2017. Retrieved 8 October 2017.
  34. ^ Les Echos (3 April 2013). "Les Echos – Big Data car Low-Density Data ? La faible densité en information comme facteur discriminant – Archives". Lesechos.fr. Archived from the original on 30 April 2014. Retrieved 8 October 2017.
  35. ^ Sagiroglu, Seref (2013). "Big data: A review". 2013 International Conference on Collaboration Technologies and Systems (CTS): 42–47. doi:10.1109/CTS.2013.6567202. ISBN 978-1-4673-6404-1. S2CID 5724608.
  36. ^ Kitchin, Rob; McArdle, Gavin (17 February 2016). "What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets". Big Data & Society. 3 (1): 205395171663113. doi:10.1177/2053951716631130.
  37. ^ Onay, Ceylan; Öztürk, Elif (2018). "A review of credit scoring research in the age of Big Data". Journal of Financial Regulation and Compliance. 26 (3): 382–405. doi:10.1108/JFRC-06-2017-0054. S2CID 158895306.
  38. ^ Big Data's Fourth V
  39. ^ "Measuring the Business Value of Big Data | IBM Big Data & Analytics Hub". www.ibmbigdatahub.com. Archived from the original on 28 January 2021. Retrieved 20 January 2021.
  40. ^ Kitchin, Rob; McArdle, Gavin (5 January 2016). "What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets". Big Data & Society. 3 (1): 205395171663113. doi:10.1177/2053951716631130. ISSN 2053-9517.
  41. ^ "Survey: Biggest Databases Approach 30 Terabytes". Eweek.com. 8 November 2003. Retrieved 8 October 2017.
  42. ^ "LexisNexis To Buy Seisint For $775 Million". The Washington Post. Archived from the original on 24 July 2008. Retrieved 15 July 2004.
  43. ^ "The Washington Post". The Washington Post. Archived from the original on 19 October 2016. Retrieved 24 August 2017.
  44. ^ Bertolucci, Jeff "Hadoop: From Experiment To Leading Big Data Platform" Archived 23 November 2020 at the Wayback Machine, "Information Week", 2013. Retrieved on 14 November 2013.
  45. ^ Webster, John. "MapReduce: Simplified Data Processing on Large Clusters" Archived 14 December 2009 at the Wayback Machine, "Search Storage", 2004. Retrieved on 25 March 2013.
  46. ^ "Big Data Solution Offering". MIKE2.0. Archived from the original on 16 March 2013. Retrieved 8 December 2013.
  47. ^ "Big Data Definition". MIKE2.0. Archived from the original on 25 September 2018. Retrieved 9 March 2013.
  48. ^ Boja, C; Pocovnicu, A; Bătăgan, L. (2012). "Distributed Parallel Architecture for Big Data". Informatica Economica. 16 (2): 116–127.
  49. ^ "Solving Key Business Challenges With a Big Data Lake" (PDF). Hcltech.com. August 2014. Archived (PDF) from the original on 3 July 2017. Retrieved 8 October 2017.
  50. ^ "Method for testing the fault tolerance of MapReduce frameworks" (PDF). Computer Networks. 2015. Archived (PDF) from the original on 22 July 2016. Retrieved 13 April 2016.
  51. ^ a b Manyika, James; Chui, Michael; Bughin, Jaques; Brown, Brad; Dobbs, Richard; Roxburgh, Charles; Byers, Angela Hung (May 2011). "Big Data: The next frontier for innovation, competition, and productivity" (PDF). McKinsey Global Institute. Archived (PDF) from the original on 25 July 2021. Retrieved 22 May 2021.
  52. ^ "Future Directions in Tensor-Based Computation and Modeling" (PDF). May 2009. Archived (PDF) from the original on 17 April 2018. Retrieved 4 January 2013.
  53. ^ Lu, Haiping; Plataniotis, K.N.; Venetsanopoulos, A.N. (2011). "A Survey of Multilinear Subspace Learning for Tensor Data" (PDF). Pattern Recognition. 44 (7): 1540–1551. Bibcode:2011PatRe..44.1540L. doi:10.1016/j.patcog.2011.01.004. Archived (PDF) from the original on 10 July 2019. Retrieved 21 January 2013.
  54. ^ Pllana, Sabri; Janciak, Ivan; Brezany, Peter; Wöhrer, Alexander (2016). "A Survey of the State of the Art in Data Mining and Integration Query Languages". 2011 14th International Conference on Network-Based Information Systems. 2011 International Conference on Network-Based Information Systems (NBIS 2011). IEEE Computer Society. pp. 341–348. arXiv:1603.01113. Bibcode:2016arXiv160301113P. doi:10.1109/NBiS.2011.58. ISBN 978-1-4577-0789-6. S2CID 9285984.
  55. ^ Wang, Yandong; Goldstone, Robin; Yu, Weikuan; Wang, Teng (October 2014). "Characterization and Optimization of Memory-Resident MapReduce on HPC Systems". 2014 IEEE 28th International Parallel and Distributed Processing Symposium. IEEE. pp. 799–808. doi:10.1109/IPDPS.2014.87. ISBN 978-1-4799-3800-1. S2CID 11157612.
  56. ^ L'Heureux, A.; Grolinger, K.; Elyamany, H. F.; Capretz, M. A. M. (2017). "Machine Learning With Big Data: Challenges and Approaches". IEEE Access. 5: 7776–7797. doi:10.1109/ACCESS.2017.2696365. ISSN 2169-3536.
  57. ^ Monash, Curt (30 April 2009). "eBay's two enormous data warehouses". Archived from the original on 31 March 2019. Retrieved 11 November 2010.
    Monash, Curt (6 October 2010). "eBay followup – Greenplum out, Teradata > 10 petabytes, Hadoop has some value, and more". Archived from the original on 31 March 2019. Retrieved 11 November 2010.
  58. ^ "Resources on how Topological Data Analysis is used to analyze big data". Ayasdi. Archived from the original on 3 March 2013. Retrieved 5 March 2013.
  59. ^ CNET News (1 April 2011). "Storage area networks need not apply". Archived from the original on 18 October 2013. Retrieved 17 April 2013.
  60. ^ Hilbert, Martin (2014). "What is the Content of the World's Technologically Mediated Information and Communication Capacity: How Much Text, Image, Audio, and Video?". The Information Society. 30 (2): 127–143. doi:10.1080/01972243.2013.873748. S2CID 45759014. Archived from the original on 24 June 2020. Retrieved 12 July 2019.
  61. ^ Rajpurohit, Anmol (11 July 2014). "Interview: Amy Gershkoff, Director of Customer Analytics & Insights, eBay on How to Design Custom In-House BI Tools". KDnuggets. Archived from the original on 14 July 2014. Retrieved 14 July 2014. Generally, I find that off-the-shelf business intelligence tools do not meet the needs of clients who want to derive custom insights from their data. Therefore, for medium-to-large organizations with access to strong technical talent, I usually recommend building custom, in-house solutions.
  62. ^ "The Government and big data: Use, problems and potential". Computerworld. 21 March 2012. Archived from the original on 15 September 2016. Retrieved 12 September 2016.
  63. ^ "White Paper: Big Data for Development: Opportunities & Challenges (2012) – United Nations Global Pulse". Unglobalpulse.org. Archived from the original on 1 June 2020. Retrieved 13 April 2016.
  64. ^ "WEF (World Economic Forum), & Vital Wave Consulting. (2012). Big Data, Big Impact: New Possibilities for International Development". World Economic Forum. Archived from the original on 1 June 2020. Retrieved 24 August 2012.
  65. ^ a b c d e Hilbert, M. (2016). Big Data for Development: A Review of Promises and Challenges. Development Policy Review, 34(1), 135–174. https://doi.org/10.1111/dpr.12142 Archived 1 June 2022 at the Wayback Machine free access: https://www.martinhilbert.net/big-data-for-development/ Archived 21 April 2021 at the Wayback Machine
  66. ^ "Elena Kvochko, Four Ways To talk About Big Data (Information Communication Technologies for Development Series)". worldbank.org. 4 December 2012. Archived from the original on 15 December 2012. Retrieved 30 May 2012.
  67. ^ "Daniele Medri: Big Data & Business: An on-going revolution". Statistics Views. 21 October 2013. Archived from the original on 17 June 2015. Retrieved 21 June 2015.
  68. ^ Tobias Knobloch and Julia Manske (11 January 2016). "Responsible use of data". D+C, Development and Cooperation. Archived from the original on 13 January 2017. Retrieved 11 January 2017.
  69. ^ Mann, S., & Hilbert, M. (2020). AI4D: Artificial Intelligence for Development. International Journal of Communication, 14(0), 21. https://www.martinhilbert.net/ai4d-artificial-intelligence-for-development/ Archived 22 April 2021 at the Wayback Machine
  70. ^ Blumenstock, J. E. (2016). Fighting poverty with data. Science, 353(6301), 753–754. https://doi.org/10.1126/science.aah5217 Archived 1 June 2022 at the Wayback Machine
  71. ^ Blumenstock, J., Cadamuro, G., & On, R. (2015). Predicting poverty and wealth from mobile phone metadata. Science, 350(6264), 1073–1076. https://doi.org/10.1126/science.aac4420 Archived 1 June 2022 at the Wayback Machine
  72. ^ Jean, N., Burke, M., Xie, M., Davis, W. M., Lobell, D. B., & Ermon, S. (2016). Combining satellite imagery and machine learning to predict poverty. Science, 353(6301), 790–794. https://doi.org/10.1126/science.aaf7894 Archived 1 June 2022 at the Wayback Machine
  73. ^ a b Hilbert, M., & Lu, K. (2020). The online job market trace in Latin America and the Caribbean (UN ECLAC LC/TS.2020/83; p. 79). United Nations Economic Commission for Latin America and the Caribbean. https://www.cepal.org/en/publications/45892-online-job-market-trace-latin-america-and-caribbean Archived 22 September 2020 at the Wayback Machine
  74. ^ UN ECLAC, (United Nations Economic Commission for Latin America and the Caribbean). (2020). Tracking the digital footprint in Latin America and the Caribbean: Lessons learned from using big data to assess the digital economy (Productive Development, Gender Affairs LC/TS.2020/12; Documentos de Proyecto). United Nations ECLAC. https://repositorio.cepal.org/handle/11362/45484 Archived 18 September 2020 at the Wayback Machine
  75. ^ Banerjee, Amitav; Chaudhury, Suprakash (2010). "Statistics without tears: Populations and samples". Industrial Psychiatry Journal. 19 (1): 60–65. doi:10.4103/0972-6748.77642. ISSN 0972-6748. PMC 3105563. PMID 21694795.
  76. ^ Huser V, Cimino JJ (July 2016). "Impending Challenges for the Use of Big Data". International Journal of Radiation Oncology, Biology, Physics. 95 (3): 890–894. doi:10.1016/j.ijrobp.2015.10.060. PMC 4860172. PMID 26797535.
  77. ^ Sejdic, Ervin; Falk, Tiago H. (4 July 2018). Signal Processing and Machine Learning for Biomedical Big Data. Sejdić, Ervin, Falk, Tiago H. [Place of publication not identified]. ISBN 9781351061216. OCLC 1044733829.
  78. ^ Raghupathi W, Raghupathi V (December 2014). "Big data analytics in healthcare: promise and potential". Health Information Science and Systems. 2 (1): 3. doi:10.1186/2047-2501-2-3. PMC 4341817. PMID 25825667.
  79. ^ Viceconti M, Hunter P, Hose R (July 2015). "Big data, big knowledge: big data for personalized healthcare" (PDF). IEEE Journal of Biomedical and Health Informatics. 19 (4): 1209–15. doi:10.1109/JBHI.2015.2406883. PMID 26218867. S2CID 14710821. Archived (PDF) from the original on 23 July 2018. Retrieved 21 September 2019.
  80. ^ O'Donoghue, John; Herbert, John (1 October 2012). "Data Management Within mHealth Environments: Patient Sensors, Mobile Devices, and Databases". Journal of Data and Information Quality. 4 (1): 5:1–5:20. doi:10.1145/2378016.2378021. S2CID 2318649.
  81. ^ Mirkes EM, Coats TJ, Levesley J, Gorban AN (August 2016). "Handling missing data in large healthcare dataset: A case study of unknown trauma outcomes". Computers in Biology and Medicine. 75: 203–16. arXiv:1604.00627. Bibcode:2016arXiv160400627M. doi:10.1016/j.compbiomed.2016.06.004. PMID 27318570. S2CID 5874067.
  82. ^ Murdoch TB, Detsky AS (April 2013). "The inevitable application of big data to health care". JAMA. 309 (13): 1351–2. doi:10.1001/jama.2013.393. PMID 23549579.
  83. ^ Vayena E, Salathé M, Madoff LC, Brownstein JS (February 2015). "Ethical challenges of big data in public health". PLOS Computational Biology. 11 (2): e1003904. Bibcode:2015PLSCB..11E3904V. doi:10.1371/journal.pcbi.1003904. PMC 4321985. PMID 25664461.
  84. ^ Copeland, CS (July–August 2017). "Data Driving Discovery" (PDF). Healthcare Journal of New Orleans: 22–27. Archived (PDF) from the original on 5 December 2019. Retrieved 5 December 2019.
  85. ^ a b Yanase J, Triantaphyllou E (2019). "A Systematic Survey of Computer-Aided Diagnosis in Medicine: Past and Present Developments". Expert Systems with Applications. 138: 112821. doi:10.1016/j.eswa.2019.112821. S2CID 199019309.
  86. ^ Dong X, Bahroos N, Sadhu E, Jackson T, Chukhman M, Johnson R, Boyd A, Hynes D (2013). "Leverage Hadoop framework for large scale clinical informatics applications". AMIA Joint Summits on Translational Science Proceedings. AMIA Joint Summits on Translational Science. 2013: 53. PMID 24303235.
  87. ^ Clunie D (2013). "Breast tomosynthesis challenges digital imaging infrastructure". Archived from the original on 24 February 2021. Retrieved 24 July 2019.
  88. ^ Yanase J, Triantaphyllou E (2019). "The Seven Key Challenges for the Future of Computer-Aided Diagnosis in Medicine". International Journal of Medical Informatics. 129: 413–422. doi:10.1016/j.ijmedinf.2019.06.017. PMID 31445285. S2CID 198287435.
  89. ^ "Degrees in Big Data: Fad or Fast Track to Career Success". Forbes. Archived from the original on 3 March 2016. Retrieved 21 February 2016.
  90. ^ "NY gets new boot camp for data scientists: It's free but harder to get into than Harvard". Venture Beat. Archived from the original on 15 February 2016. Retrieved 21 February 2016.
  91. ^ Wedel, Michel; Kannan, PK (2016). "Marketing Analytics for Data-Rich Environments". Journal of Marketing. 80 (6): 97–121. doi:10.1509/jm.15.0413. S2CID 168410284.
  92. ^ Couldry, Nick; Turow, Joseph (2014). "Advertising, Big Data, and the Clearance of the Public Realm: Marketers' New Approaches to the Content Subsidy". International Journal of Communication. 8: 1710–1726.
  93. ^ "Why Digital Advertising Agencies Suck at Acquisition and are in Dire Need of an AI Assisted Upgrade". Ishti.org. 15 April 2018. Archived from the original on 12 February 2019. Retrieved 15 April 2018.
  94. ^ "Big data and analytics: C4 and Genius Digital". Ibc.org. Archived from the original on 8 October 2017. Retrieved 8 October 2017.
  95. ^ Marshall Allen (17 July 2018). "Health Insurers Are Vacuuming Up Details About You – And It Could Raise Your Rates". www.propublica.org. Archived from the original on 21 July 2018. Retrieved 21 July 2018.
  96. ^ "QuiO Named Innovation Champion of the Accenture HealthTech Innovation Challenge". Businesswire.com. 10 January 2017. Archived from the original on 22 March 2017. Retrieved 8 October 2017.
  97. ^ "A Software Platform for Operational Technology Innovation" (PDF). Predix.com. Archived from the original (PDF) on 22 March 2017. Retrieved 8 October 2017.
  98. ^ Z. Jenipher Wang (March 2017). "Big Data Driven Smart Transportation: the Underlying Story of IoT Transformed Mobility". Archived from the original on 4 July 2018. Retrieved 4 July 2018.
  99. ^ "That Internet Of Things Thing". 22 June 2009. Archived from the original on 2 May 2013. Retrieved 29 December 2017.
  100. ^ a b Solnik, Ray. "The Time Has Come: Analytics Delivers for IT Operations". Data Center Journal. Archived from the original on 4 August 2016. Retrieved 21 June 2016.
  101. ^ Josh Rogin (2 August 2018). "Ethnic cleansing makes a comeback – in China". No. Washington Post. Archived from the original on 31 March 2019. Retrieved 4 August 2018. Add to that the unprecedented security and surveillance state in Xinjiang, which includes all-encompassing monitoring based on identity cards, checkpoints, facial recognition and the collection of DNA from millions of individuals. The authorities feed all this data into an artificial-intelligence machine that rates people's loyalty to the Communist Party in order to control every aspect of their lives.
  102. ^ "China: Big Data Fuels Crackdown in Minority Region: Predictive Policing Program Flags Individuals for Investigations, Detentions". hrw.org. Human Rights Watch. 26 February 2018. Archived from the original on 21 December 2019. Retrieved 4 August 2018.
  103. ^ "Discipline and Punish: The Birth of China's Social-Credit System". The Nation. 23 January 2019. Archived from the original on 13 September 2019. Retrieved 8 August 2019.
  104. ^ "China's behavior monitoring system bars some from travel, purchasing property". CBS News. 24 April 2018. Archived from the original on 13 August 2019. Retrieved 8 August 2019.
  105. ^ "The complicated truth about China's social credit system". WIRED. 21 January 2019. Archived from the original on 8 August 2019. Retrieved 8 August 2019.
  106. ^ "News: Live Mint". Are Indian companies making enough sense of Big Data?. Live Mint. 23 June 2014. Archived from the original on 29 November 2014. Retrieved 22 November 2014.
  107. ^ "Israeli startup uses big data, minimal hardware to treat diabetes". The Times of Israel. Archived from the original on 1 March 2018. Retrieved 28 February 2018.
  108. ^ Singh, Gurparkash; Schulthess, Duane; Hughes, Nigel; Vannieuwenhuyse, Bart; Kalra, Dipak (2018). "Real world big data for clinical research and drug development". Drug Discovery Today. 23 (3): 652–660. doi:10.1016/j.drudis.2017.12.002. PMID 29294362.
  109. ^ "Recent advances delivered by Mobile Cloud Computing and Internet of Things for Big Data applications: a survey". International Journal of Network Management. 11 March 2016. Archived from the original on 1 June 2022. Retrieved 14 September 2016.
  110. ^ Kalil, Tom (29 March 2012). "Big Data is a Big Deal". whitehouse.gov. Archived from the original on 10 January 2017. Retrieved 26 September 2012 – via National Archives.
  111. ^ Executive Office of the President (March 2012). "Big Data Across the Federal Government" (PDF). Office of Science and Technology Policy. Archived (PDF) from the original on 21 January 2017. Retrieved 26 September 2012 – via National Archives.
  112. ^ Lampitt, Andrew (14 February 2013). "The real story of how big data analytics helped Obama win". InfoWorld. Archived from the original on 5 July 2014. Retrieved 31 May 2014.
  113. ^ "November 2018 | TOP500 Supercomputer Sites". Archived from the original on 12 June 2020. Retrieved 13 November 2018.
  114. ^ Hoover, J. Nicholas. "Government's 10 Most Powerful Supercomputers". Information Week. UBM. Archived from the original on 16 October 2013. Retrieved 26 September 2012.
  115. ^ Bamford, James (15 March 2012). "The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)". Wired. Archived from the original on 4 April 2012. Retrieved 18 March 2013.
  116. ^ "Groundbreaking Ceremony Held for $1.2 Billion Utah Data Center". National Security Agency Central Security Service. Archived from the original on 5 September 2013. Retrieved 18 March 2013.
  117. ^ Hill, Kashmir. "Blueprints of NSA's Ridiculously Expensive Data Center in Utah Suggest It Holds Less Info Than Thought". Forbes. Archived from the original on 29 March 2018. Retrieved 31 October 2013.
  118. ^ Smith, Gerry; Hallman, Ben (12 June 2013). "NSA Spying Controversy Highlights Embrace of Big Data". Huffington Post. Archived from the original on 19 July 2017. Retrieved 7 May 2018.
  119. ^ Wingfield, Nick (12 March 2013). "Predicting Commutes More Accurately for Would-Be Home Buyers". The New York Times. Archived from the original on 29 May 2013. Retrieved 21 July 2013.
  120. ^ "FICO® Falcon® Fraud Manager". Fico.com. Archived from the original on 11 November 2012. Retrieved 21 July 2013.
  121. ^ Alexandru, Dan. "Prof" (PDF). cds.cern.ch. CERN. Archived (PDF) from the original on 15 July 2017. Retrieved 24 March 2015.
  122. ^ "LHC Brochure, English version. A presentation of the largest and the most powerful particle accelerator in the world, the Large Hadron Collider (LHC), which started up in 2008. Its role, characteristics, technologies, etc. are explained for the general public". CERN-Brochure-2010-006-Eng. LHC Brochure, English version. CERN. Archived from the original on 19 March 2019. Retrieved 20 January 2013.
  123. ^ "LHC Guide, English version. A collection of facts and figures about the Large Hadron Collider (LHC) in the form of questions and answers". CERN-Brochure-2008-001-Eng. LHC Guide, English version. CERN. Archived from the original on 7 April 2020. Retrieved 20 January 2013.
  124. ^ Brumfiel, Geoff (19 January 2011). "High-energy physics: Down the petabyte highway". Nature. 469 (7330): 282–83. Bibcode:2011Natur.469..282B. doi:10.1038/469282a. PMID 21248814. S2CID 533166. Archived from the original on 30 July 2017. Retrieved 2 February 2012.
  125. ^ "IBM Research – Zurich" (PDF). Zurich.ibm.com. Archived from the original on 1 June 2022. Retrieved 8 October 2017.
  126. ^ "Future telescope array drives development of Exabyte processing". Ars Technica. 2 April 2012. Archived from the original on 31 March 2019. Retrieved 15 April 2015.
  127. ^ "Australia's bid for the Square Kilometre Array – an insider's perspective". The Conversation. 1 February 2012. Archived from the original on 12 October 2016. Retrieved 27 September 2016.
  128. ^ "Delort P., OECD ICCP Technology Foresight Forum, 2012" (PDF). Oecd.org. Archived (PDF) from the original on 19 June 2017. Retrieved 8 October 2017.
  129. ^ "NASA – NASA Goddard Introduces the NASA Center for Climate Simulation". Nasa.gov. Archived from the original on 3 April 2016. Retrieved 13 April 2016.
  130. ^ Webster, Phil. "Supercomputing the Climate: NASA's Big Data Mission". CSC World. Computer Sciences Corporation. Archived from the original on 4 January 2013. Retrieved 18 January 2013.
  131. ^ "These six great neuroscience ideas could make the leap from lab to market". The Globe and Mail. 20 November 2014. Archived from the original on 11 October 2016. Retrieved 1 October 2016.
  132. ^ "DNAstack tackles massive, complex DNA datasets with Google Genomics". Google Cloud Platform. Archived from the original on 24 September 2016. Retrieved 1 October 2016.
  133. ^ "23andMe – Ancestry". 23andme.com. Archived from the original on 18 December 2016. Retrieved 29 December 2016.
  134. ^ a b Potenza, Alessandra (13 July 2016). "23andMe wants researchers to use its kits, in a bid to expand its collection of genetic data". The Verge. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  135. ^ "This Startup Will Sequence Your DNA, So You Can Contribute To Medical Research". Fast Company. 23 December 2016. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  136. ^ Seife, Charles. "23andMe Is Terrifying, but Not for the Reasons the FDA Thinks". Scientific American. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  137. ^ Zaleski, Andrew (22 June 2016). "This biotech start-up is betting your genes will yield the next wonder drug". CNBC. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  138. ^ Regalado, Antonio. "How 23andMe turned your DNA into a $1 billion drug discovery machine". MIT Technology Review. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  139. ^ "23andMe reports jump in requests for data in wake of Pfizer depression study | FierceBiotech". fiercebiotech.com. 22 August 2016. Archived from the original on 29 December 2016. Retrieved 29 December 2016.
  140. ^ Admire Moyo (23 October 2015). "Data scientists predict Springbok defeat". itweb.co.za. Archived from the original on 22 December 2015. Retrieved 12 December 2015.
  141. ^ Regina Pazvakavambwa (17 November 2015). "Predictive analytics, big data transform sports". itweb.co.za. Archived from the original on 22 December 2015. Retrieved 12 December 2015.
  142. ^ Dave Ryan (13 November 2015). "Sports: Where Big Data Finally Makes Sense". huffingtonpost.com. Archived from the original on 22 December 2015. Retrieved 12 December 2015.
  143. ^ Frank Bi. "How Formula One Teams Are Using Big Data To Get The Inside Edge". Forbes. Archived from the original on 20 December 2015. Retrieved 12 December 2015.
  144. ^ Tay, Liz. "Inside eBay's 90PB data warehouse". ITNews. Archived from the original on 15 February 2016. Retrieved 12 February 2016.
  145. ^ Layton, Julia (25 January 2006). "Amazon Technology". Money.howstuffworks.com. Archived from the original on 28 February 2013. Retrieved 5 March 2013.
  146. ^ "Scaling Facebook to 500 Million Users and Beyond". Facebook.com. Archived from the original on 5 July 2013. Retrieved 21 July 2013.
  147. ^ Constine, Josh (27 June 2017). "Facebook now has 2 billion monthly users… and responsibility". TechCrunch. Archived from the original on 27 December 2020. Retrieved 3 September 2018.
  148. ^ "Google Still Doing at Least 1 Trillion Searches Per Year". Search Engine Land. 16 January 2015. Archived from the original on 15 April 2015. Retrieved 15 April 2015.
  149. ^ Haleem, Abid; Javaid, Mohd; Khan, Ibrahim; Vaishya, Raju (2020). "Significant Applications of Big Data in COVID-19 Pandemic". Indian Journal of Orthopaedics. 54 (4): 526–528. doi:10.1007/s43465-020-00129-z. PMC 7204193. PMID 32382166.
  150. ^ Manancourt, Vincent (10 March 2020). "Coronavirus tests Europe's resolve on privacy". Politico. Archived from the original on 20 March 2020. Retrieved 30 October 2020.
  151. ^ Choudhury, Amit Roy (27 March 2020). "Gov in the Time of Corona". Gov Insider. Archived from the original on 20 March 2020. Retrieved 30 October 2020.
  152. ^ Cellan-Jones, Rory (11 February 2020). "China launches coronavirus 'close contact detector' app". BBC. Archived from the original on 28 February 2020. Retrieved 30 October 2020.
  153. ^ Siwach, Gautam; Esmailpour, Amir (March 2014). Encrypted Search & Cluster Formation in Big Data (PDF). ASEE 2014 Zone I Conference. University of Bridgeport, Bridgeport, Connecticut, US. Archived from the original (PDF) on 9 August 2014. Retrieved 26 July 2014.
  154. ^ "Obama Administration Unveils "Big Data" Initiative:Announces $200 Million in New R&D Investments" (PDF). Office of Science and Technology Policy. Archived (PDF) from the original on 21 January 2017 – via National Archives.
  155. ^ "AMPLab at the University of California, Berkeley". Amplab.cs.berkeley.edu. Archived from the original on 6 May 2011. Retrieved 5 March 2013.
  156. ^ "NSF Leads Federal Efforts in Big Data". National Science Foundation (NSF). 29 March 2012. Archived from the original on 31 March 2019. Retrieved 6 April 2018.
  157. ^ Timothy Hunter; Teodor Moldovan; Matei Zaharia; Justin Ma; Michael Franklin; Pieter Abbeel; Alexandre Bayen (October 2011). Scaling the Mobile Millennium System in the Cloud. Archived from the original on 31 March 2019. Retrieved 2 November 2012.
  158. ^ David Patterson (5 December 2011). "Computer Scientists May Have What It Takes to Help Cure Cancer". The New York Times. Archived from the original on 30 January 2017. Retrieved 26 February 2017.
  159. ^ "Secretary Chu Announces New Institute to Help Scientists Improve Massive Data Set Research on DOE Supercomputers". energy.gov. Archived from the original on 3 April 2019. Retrieved 2 November 2012.
  160. ^ Young, Shannon (30 May 2012). "Mass. governor, MIT announce big data initiative". Boston.com. Archived from the original on 29 July 2021. Retrieved 29 July 2021.
  161. ^ "Big Data @ CSAIL". Bigdata.csail.mit.edu. 22 February 2013. Archived from the original on 30 March 2013. Retrieved 5 March 2013.
  162. ^ "Big Data Public Private Forum". cordis.europa.eu. 1 September 2012. Archived from the original on 9 March 2021. Retrieved 16 March 2020.
  163. ^ "Alan Turing Institute to be set up to research big data". BBC News. 19 March 2014. Archived from the original on 18 August 2021. Retrieved 19 March 2014.
  164. ^ "Inspiration day at University of Waterloo, Stratford Campus". betakit.com/. Archived from the original on 26 February 2014. Retrieved 28 February 2014.
  165. ^ a b c Reips, Ulf-Dietrich; Matzat, Uwe (2014). "Mining "Big Data" using Big Data Services". International Journal of Internet Science. 1 (1): 1–8. Archived from the original on 14 August 2014. Retrieved 14 August 2014.
  166. ^ Preis T, Moat HS, Stanley HE, Bishop SR (2012). "Quantifying the advantage of looking forward". Scientific Reports. 2: 350. Bibcode:2012NatSR...2E.350P. doi:10.1038/srep00350. PMC 3320057. PMID 22482034.
  167. ^ Marks, Paul (5 April 2012). "Online searches for future linked to economic success". New Scientist. Archived from the original on 8 April 2012. Retrieved 9 April 2012.
  168. ^ Johnston, Casey (6 April 2012). "Google Trends reveals clues about the mentality of richer nations". Ars Technica. Archived from the original on 7 April 2012. Retrieved 9 April 2012.
  169. ^ Tobias Preis (24 May 2012). "Supplementary Information: The Future Orientation Index is available for download" (PDF). Archived (PDF) from the original on 17 January 2013. Retrieved 24 May 2012.
  170. ^ Philip Ball (26 April 2013). "Counting Google searches predicts market movements". Nature. doi:10.1038/nature.2013.12879. S2CID 167357427. Archived from the original on 27 September 2013. Retrieved 9 August 2013.
  171. ^ Preis T, Moat HS, Stanley HE (2013). "Quantifying trading behavior in financial markets using Google Trends". Scientific Reports. 3: 1684. Bibcode:2013NatSR...3E1684P. doi:10.1038/srep01684. PMC 3635219. PMID 23619126.
  172. ^ Nick Bilton (26 April 2013). "Google Search Terms Can Predict Stock Market, Study Finds". The New York Times. Archived from the original on 2 June 2013. Retrieved 9 August 2013.
  173. ^ Christopher Matthews (26 April 2013). "Trouble With Your Investment Portfolio? Google It!". Time. Archived from the original on 21 August 2013. Retrieved 9 August 2013.
  174. ^ Philip Ball (26 April 2013). "Counting Google searches predicts market movements". Nature. doi:10.1038/nature.2013.12879. S2CID 167357427. Archived from the original on 27 September 2013. Retrieved 9 August 2013.
  175. ^ Bernhard Warner (25 April 2013). "'Big Data' Researchers Turn to Google to Beat the Markets". Bloomberg Businessweek. Archived from the original on 23 July 2013. Retrieved 9 August 2013.
  176. ^ Hamish McRae (28 April 2013). "Hamish McRae: Need a valuable handle on investor sentiment? Google it". The Independent. London. Archived from the original on 25 July 2018. Retrieved 9 August 2013.
  177. ^ Richard Waters (25 April 2013). "Google search proves to be new word in stock market prediction". Financial Times. Archived from the original on 1 June 2022. Retrieved 9 August 2013.
  178. ^ Jason Palmer (25 April 2013). "Google searches predict market moves". BBC. Archived from the original on 5 June 2013. Retrieved 9 August 2013.
  179. ^ E. Sejdić (March 2014). "Adapt current tools for use with big data". Nature. 507 (7492): 306.
  180. ^ Stanford. "MMDS. Workshop on Algorithms for Modern Massive Data Sets" Archived 4 December 2019 at the Wayback Machine.
  181. ^ Deepan Palguna; Vikas Joshi; Venkatesan Chakravarthy; Ravi Kothari & L. V. Subramaniam (2015). Analysis of Sampling Algorithms for Twitter. International Joint Conference on Artificial Intelligence.
  182. ^ Chris Kimble; Giannis Milolidakis (7 October 2015). "Big Data and Business Intelligence: Debunking the Myths". Global Business and Organizational Excellence. 35 (1): 23–34. arXiv:1511.03085. doi:10.1002/JOE.21642. ISSN 1932-2054. Wikidata Q56532925.
  183. ^ Chris Anderson (23 June 2008). "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete". Wired. Archived from the original on 27 March 2014. Retrieved 5 March 2017.
  184. ^ Graham M. (9 March 2012). "Big data and the end of theory?". The Guardian. London. Archived from the original on 24 July 2013. Retrieved 14 December 2016.
  185. ^ Shah, Shvetank; Horne, Andrew; Capellá, Jaime (April 2012). "Good Data Won't Guarantee Good Decisions". Harvard Business Review. Archived from the original on 11 September 2012. Retrieved 8 September 2012.
  186. ^ a b Big Data requires Big Visions for Big Change. Archived 2 December 2016 at the Wayback Machine, Hilbert, M. (2014). London: TEDx UCL, x=independently organized TED talks
  187. ^ Alemany Oliver, Mathieu; Vayre, Jean-Sebastien (2015). "Big Data and the Future of Knowledge Production in Marketing Research: Ethics, Digital Traces, and Abductive Reasoning". Journal of Marketing Analytics. 3 (1): 5–13. doi:10.1057/jma.2015.1. S2CID 111360835.
  188. ^ Jonathan Rauch (1 April 2002). "Seeing Around Corners". The Atlantic. Archived from the original on 4 April 2017. Retrieved 5 March 2017.
  189. ^ Epstein, J. M., & Axtell, R. L. (1996). Growing Artificial Societies: Social Science from the Bottom Up. A Bradford Book.
  190. ^ "Delort P., Big data in Biosciences, Big Data Paris, 2012" (PDF). Bigdataparis.com. Archived from the original (PDF) on 30 July 2016. Retrieved 8 October 2017.
  191. ^ "Next-generation genomics: an integrative approach" (PDF). nature. July 2010. Archived (PDF) from the original on 13 August 2017. Retrieved 18 October 2016.
  192. ^ "Big Data in Biosciences". October 2015. Archived from the original on 1 June 2022. Retrieved 18 October 2016.
  193. ^ "Big data: are we making a big mistake?". Financial Times. 28 March 2014. Archived from the original on 30 June 2016. Retrieved 20 October 2016.
  194. ^ Ohm, Paul (23 August 2012). "Don't Build a Database of Ruin". Harvard Business Review. Archived from the original on 30 August 2012. Retrieved 29 August 2012.
  195. ^ Bond-Graham, Darwin (2018). "The Perspective on Big Data" Archived 9 November 2020 at the Wayback Machine. The Perspective.
  196. ^ Al-Rodhan, Nayef (16 September 2014). "The Social Contract 2.0: Big Data and the Need to Guarantee Privacy and Civil Liberties – Harvard International Review". Harvard International Review. Archived from the original on 13 April 2017. Retrieved 3 April 2017.
  197. ^ Barocas, Solon; Nissenbaum, Helen; Lane, Julia; Stodden, Victoria; Bender, Stefan; Nissenbaum, Helen (June 2014). Big Data's End Run around Anonymity and Consent. Cambridge University Press. pp. 44–75. doi:10.1017/cbo9781107590205.004. ISBN 9781107067356. S2CID 152939392.
  198. ^ Lugmayr, Artur; Stockleben, Bjoern; Scheib, Christoph; Mailaparampil, Mathew; Mesia, Noora; Ranta, Hannu; Lab, Emmi (1 June 2016). "A Comprehensive Survey On Big-Data Research and Its Implications – What is Really 'New' in Big Data? – It's Cognitive Big Data!". Archived from the original on 1 June 2022. Retrieved 30 December 2017.
  199. ^ danah boyd (29 April 2010). "Privacy and Publicity in the Context of Big Data". WWW 2010 conference. Archived from the original on 22 October 2018. Retrieved 18 April 2011.
  200. ^ Katyal, Sonia K. (2019). "Artificial Intelligence, Advertising, and Disinformation". Advertising & Society Quarterly. 20 (4). doi:10.1353/asr.2019.0026. ISSN 2475-1790. S2CID 213397212. Archived from the original on 28 October 2020. Retrieved 18 November 2020.
  201. ^ Jones, MB; Schildhauer, MP; Reichman, OJ; Bowers, S (2006). "The New Bioinformatics: Integrating Ecological Data from the Gene to the Biosphere" (PDF). Annual Review of Ecology, Evolution, and Systematics. 37 (1): 519–544. doi:10.1146/annurev.ecolsys.37.091305.110031. Archived (PDF) from the original on 8 July 2019. Retrieved 19 September 2012.
  202. ^ a b Boyd, D.; Crawford, K. (2012). "Critical Questions for Big Data". Information, Communication & Society. 15 (5): 662–679. doi:10.1080/1369118X.2012.678878. hdl:10983/1320. S2CID 51843165.
  203. ^ Failure to Launch: From Big Data to Big Decisions Archived 6 December 2016 at the Wayback Machine, Forte Wares.
  204. ^ "15 Insane Things That Correlate with Each Other". Archived from the original on 27 June 2019. Retrieved 27 June 2019.
  205. ^ "Random structures & algorithms". Archived from the original on 27 June 2019. Retrieved 27 June 2019.
  206. ^ Calude, Cristian S.; Longo, Giuseppe (2016). "The Deluge of Spurious Correlations in Big Data". Foundations of Science.
  207. ^ Anja Lambrecht and Catherine Tucker (2016) "The 4 Mistakes Most Managers Make with Analytics," Harvard Business Review, July 12. https://hbr.org/2016/07/the-4-mistakes-most-managers-make-with-analytics Archived 26 January 2022 at the Wayback Machine
  208. ^ a b Gregory Piatetsky (12 August 2014). "Interview: Michael Berthold, KNIME Founder, on Research, Creativity, Big Data, and Privacy, Part 2". KDnuggets. Archived from the original on 13 August 2014. Retrieved 13 August 2014.
  209. ^ Pelt, Mason (26 October 2015). ""Big Data" is an over used buzzword and this Twitter bot proves it". Siliconangle. Archived from the original on 30 October 2015. Retrieved 4 November 2015.
  210. ^ a b Harford, Tim (28 March 2014). "Big data: are we making a big mistake?". Financial Times. Archived from the original on 7 April 2014. Retrieved 7 April 2014.
  211. ^ Ioannidis JP (August 2005). "Why most published research findings are false". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
  212. ^ Lohr, Steve; Singer, Natasha (10 November 2016). "How Data Failed Us in Calling an Election". The New York Times. ISSN 0362-4331. Archived from the original on 25 November 2016. Retrieved 27 November 2016.
  213. ^ "How data-driven policing threatens human freedom". The Economist. 4 June 2018. ISSN 0013-0613. Archived from the original on 27 October 2019. Retrieved 27 October 2019.
  214. ^ Brayne, Sarah (29 August 2017). "Big Data Surveillance: The Case of Policing". American Sociological Review. 82 (5): 977–1008. doi:10.1177/0003122417725865. S2CID 3609838.

Further reading

  • Peter Kinnaird; Inbal Talgam-Cohen, eds. (2012). "Big Data". XRDS: Crossroads, The ACM Magazine for Students. Vol. 19, no. 1. Association for Computing Machinery. ISSN 1528-4980. OCLC 779657714.
  • Jure Leskovec; Anand Rajaraman; Jeffrey D. Ullman (2014). Mining of massive datasets. Cambridge University Press. ISBN 9781107077232. OCLC 888463433.
  • Viktor Mayer-Schönberger; Kenneth Cukier (2013). Big Data: A Revolution that Will Transform how We Live, Work, and Think. Houghton Mifflin Harcourt. ISBN 9781299903029. OCLC 828620988.
  • Press, Gil (9 May 2013). "A Very Short History of Big Data". forbes.com. Jersey City, NJ. Retrieved 17 September 2016.
  • Stephens-Davidowitz, Seth (2017). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Dey Street Books. ISBN 978-0062390851.
  • "Big Data: The Management Revolution". Harvard Business Review. October 2012.
  • O'Neil, Cathy (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books. ISBN 978-0553418835.
External links

  • Media related to Big data at Wikimedia Commons
  • The dictionary definition of big data at Wiktionary


23andMe

23andMe, Inc. is a publicly held personal genomics and biotechnology company based in Sunnyvale, California. It is best known for providing a direct-to-consumer genetic testing service in which customers provide a saliva sample that is analyzed in a laboratory, using single nucleotide polymorphism genotyping,[1] to generate reports relating to the customer's ancestry and genetic predispositions to health-related topics. The company's name is derived from the 23 pairs of chromosomes in a wild-type human cell.[2]

23andMe, Inc.

23andMe headquarters

  • Traded as: Nasdaq: ME
  • Industry: Biotechnology; genetic genealogy
  • Founded: April 2006
  • Founders: Linda Avey, Paul Cusenza, Anne Wojcicki
  • Headquarters: Mountain View, California, United States
  • Key people: Anne Wojcicki (CEO); Esther Dyson (board member)
  • Products: Direct-to-consumer personal genome testing; mobile application
  • Services: Genetic testing, genealogical DNA testing, medical research
  • Revenue: US$259.92 million (2021)
  • Number of employees: 683 (2019)
  • Website: 23andme.com

The company previously had a fraught relationship with the United States Food and Drug Administration (FDA) due to its genetic health tests; as of October 2015, DNA tests ordered in the US include a revised health component, per FDA approval.[3][4] 23andMe has been selling a product with both ancestry and health-related components in Canada since October 2014,[5][6][7] and in the UK since December 2014.[8]

In 2007, 23andMe became the first company to begin offering autosomal DNA testing for ancestry, which all other major companies now use.[9] Its saliva-based direct-to-consumer genetic testing business was named "Invention of the Year" by Time in 2008.[10][11][12]

Linda Avey, Paul Cusenza and Anne Wojcicki founded 23andMe in 2006 to offer genetic testing and interpretation to individuals.[1] Investment documents from 2007 also suggest that 23andMe hoped to develop a database to pursue research efforts.[13] In 2007, Google invested $3.9 million in the company, along with Genentech, New Enterprise Associates, and Mohr Davidow Ventures.[14] Wojcicki and Google co-founder Sergey Brin were married at the time.[6]

Cusenza left the company in 2007 and became CEO of Nodal Exchange the following year.[15] Avey left in 2009 and co-founded Curious, Inc. in 2011.[16]

In 2012, 23andMe raised $50 million in a Series D venture round, almost doubling its capital of $52.6 million.[17] In 2015, 23andMe raised $115 million in a Series E offering, increasing capital to $241 million.[4][18][19]

In June 2017, 23andMe created a brand marketing advertisement featuring Gru from Despicable Me.[20] In 2018, the company launched advertisements narrated by Warren Buffett.[21]

In September 2017, it was rumored that the company was raising another $200 million at a $1.5 billion valuation. By that time the company had raised $230 million since its inception.[22] It was later reported that the company had raised $250 million at a $1.75 billion valuation.[23]

On July 25, 2018, 23andMe announced a partnership with GlaxoSmithKline to allow the pharmaceutical company to use test results from 5 million customers to design new drugs. GlaxoSmithKline invested $300 million in the company.[24] In January 2022, this partnership was extended until July 2023 with an additional $50 million payment from GlaxoSmithKline.[25]

In January 2020, 23andMe announced it would lay off about 100 of its employees.[26]

In July 2020, 23andMe and GlaxoSmithKline announced their partnership's first clinical trial: a joint asset being co-developed by the two companies for cancer treatment.[27]

In December 2020, the company raised around $82.5 million in a Series F round, bringing the total raised over the years to more than $850 million. The post-money valuation was not reported.[28]

In February 2021, the company announced that it had entered into a definitive agreement to merge with Sir Richard Branson's special-purpose acquisition company, VG Acquisition Corp, in a $3.5 billion transaction.[29]

In June 2021, the company completed the merger with VG Acquisition Corp. The combined company was renamed 23andMe Holding Co. and began trading on the Nasdaq stock exchange on June 17, 2021 under the ticker symbol "ME".[30]

In October 2021, 23andMe announced that it would acquire Lemonaid Health, a telehealth company, for $400 million with the deal closing in November.[31][32]

The new genetic testing service and the ability to map significant portions of the genome have raised controversial questions, including whether the results can be interpreted meaningfully and whether they will lead to genetic discrimination.[10][33] The regulatory environment for genetic testing companies has been uncertain, and anticipated risk-based regulation catering for different types of genetic tests has not yet materialized.[34][35][36]

State regulators

In 2008, the states of New York and California each notified 23andMe and similar companies that they needed to obtain a CLIA license in order to sell tests in those states.[10][37][38] By August 2008, 23andMe had received licenses that allowed it to continue doing business in California.[39]


Food and Drug Administration

According to Anne Wojcicki, 23andMe had been in dialogue with the FDA since 2008.[36] In 2010, the FDA notified several genetic testing companies, including 23andMe, that their genetic tests were considered medical devices and that federal approval was required to market them; a similar letter was sent to Illumina, which makes the instruments and chips used by 23andMe in providing its service.[34][40][41] 23andMe first submitted applications for FDA clearance in July and September 2012.[42]

In November 2013, the FDA published guidance on how it classified genetic analysis and testing services offered by companies using instruments and chips labelled for "research use only" and instruments and chips that had been approved for clinical use.[43]

At around the same time, after not hearing from 23andMe for six months, the FDA ordered 23andMe to stop marketing its saliva collection kit and personal genome service (PGS), as 23andMe had not demonstrated that it had "analytically or clinically validated the PGS for its intended uses" and the "FDA is concerned about the public health consequences of inaccurate results from the PGS device".[42][44][45] As of December 2, 2013, 23andMe had stopped all advertisements for its PGS test but was still selling the product.[46][47] As of December 5, 2013, 23andMe was selling only raw genetic data and ancestry-related results.[3][48][49]

23andMe publicly responded to media reports on November 25, 2013, stating, "We recognize that we have not met the FDA's expectations regarding timeline and communication regarding our submission. Our relationship with the FDA is extremely important to us and we are committed to fully engaging with them to address their concerns."[50][51][52] CEO Anne Wojcicki subsequently posted an update on the 23andMe website, stating: "This is new territory for both 23andMe and the FDA. This makes the regulatory process with the FDA important because the work we are doing with the agency will help lay the groundwork for what other companies in this new industry do in the future. It will also provide important reassurance to the public that the process and science behind the service meet the rigorous standards required by those entrusted with the public's safety."[36]

On December 5, 2013, 23andMe announced that it had suspended health-related genetic tests for customers who purchased the test from November 22, 2013 onward, in order to comply with the FDA warning letter while undergoing regulatory review.[3][48][49]

In May 2014, it was reported that 23andMe was exploring alternative locations abroad, including Canada, Australia, and the United Kingdom, in which to offer its full genetic testing service.[53] 23andMe had been selling a product with both ancestry and health-related components in Canada since October 2014,[5][6][7] and in the UK since December 2014.[8]

In 2014, 23andMe submitted a 510(k) application to the FDA to market a carrier test for Bloom syndrome. The application included data showing that 23andMe's results were consistent and reliable, that the saliva collection kit and instructions were easy enough for people to use without making mistakes that might affect their results, and citations to the scientific literature showing that the specific tests 23andMe offered were associated with Bloom syndrome.[54][55] The FDA cleared the test in February 2015; in the clearance notice, the FDA said that it would not require similar applications for other carrier tests from 23andMe.[54][56] The FDA sent further clarification about regulation of the test to 23andMe on October 1, 2015.[57]

On October 21, 2015, 23andMe announced that it would begin marketing carrier tests in the US again.[4] Wojcicki said, "There was part of us that didn't understand how the regulatory environment works", with regard to the regulatory functions distributed between the FDA and the Centers for Medicare and Medicaid Services (CMS).[58]

23andMe submitted a "de novo" application to the FDA to market tests that tell people whether they have gene mutations or alleles that put them at risk of getting or having certain diseases; the application included data showing that 23andMe's results were consistent and reliable. In April 2017, the FDA approved the application for ten tests: late-onset Alzheimer's disease, Parkinson's disease, celiac disease, hereditary thrombophilia, alpha-1 antitrypsin deficiency, glucose-6-phosphate dehydrogenase deficiency, early-onset primary dystonia, factor XI deficiency, Gaucher disease, and hereditary hemochromatosis.[59][60] The FDA also said that it intended to exempt further 23andMe genetic risk tests from needing 510(k) applications, and it clarified that it was approving only genetic risk tests, not diagnostic tests.[59]

In March 2018, the FDA approved another de novo application from the company, this one for a DTC test for three specific BRCA mutations that are the most common BRCA mutations in people of Ashkenazi descent; they are not, however, the most common BRCA mutations in the general population, and the test covers only three of the approximately 1,000 known mutations.[61] These mutations increase the risk of breast and ovarian cancer in women, and the risk of breast and prostate cancer in men.[62]


A 23andMe 2021 genome testing kit

A 23andMe 2013 genome testing kit

23andMe began offering direct-to-consumer genetic testing in November 2007. Customers provide a saliva sample that is partially genotyped for single nucleotide polymorphisms (SNPs), and results are posted online.[1][63][64] In 2008, when the company was offering estimates of "predisposition for more than 90 traits and conditions ranging from baldness to blindness", Time magazine named the product "Invention of the Year".[10]

After the lab receives the sample, the DNA is extracted from the saliva and amplified so that there is enough to genotype. The DNA is then cut into small pieces and applied to a glass microarray chip whose surface carries many microscopic beads. Each bead holds a gene probe that matches one of the many variants the company tests for. If the sample contains a matching sequence, it hybridizes (binds) to the probe, and a fluorescent label on the probe signals that the variant is present in the customer's genome. Tens of thousands of variants are tested out of the 10 to 30 million in the entire genome. The matches are then compiled into a report supplied to the customer, telling them whether variants associated with certain diseases, such as Parkinson's, celiac disease, and Alzheimer's, are present in their own genome.[65]
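The matching-and-reporting step described above can be illustrated with a minimal Python sketch. The rsIDs, risk alleles, and condition names below are invented placeholders, not 23andMe's actual probe set or pipeline.

    # Toy model of microarray-based reporting: match a sample's genotype
    # calls against a panel of probed variants. All identifiers are
    # hypothetical placeholders.

    # Variants the hypothetical chip probes for: rsID -> (risk allele, condition)
    PROBED_VARIANTS = {
        "rs0000001": ("A", "condition X"),
        "rs0000002": ("T", "condition Y"),
    }

    # Genotype calls produced by the array for one sample: rsID -> genotype
    sample_calls = {
        "rs0000001": "AG",  # heterozygous: one copy of the risk allele
        "rs0000002": "CC",  # risk allele absent
    }

    def build_report(calls):
        """List every probed risk allele that appears in the sample."""
        report = []
        for rsid, (risk_allele, condition) in PROBED_VARIANTS.items():
            genotype = calls.get(rsid)
            if genotype is None:
                continue  # variant not called for this sample
            copies = genotype.count(risk_allele)
            if copies > 0:
                report.append((rsid, condition, copies))
        return report

    for rsid, condition, copies in build_report(sample_calls):
        print(f"{rsid}: {copies} copy/copies of the allele linked to {condition}")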

Uninterpreted raw genetic data may be downloaded by customers.[33] This lets customers choose any of the 23 chromosomes, as well as mitochondrial DNA, see which base is present at particular positions in genes, and see how these compare with other common variants. Customers who bought tests with an ancestry-related component have online access to genealogical DNA test results and tools, including a relative-matching database. Customers can also view their mitochondrial haplogroup (maternal) and, if they are male or if a relative sharing their patriline has also been tested, their Y-chromosome haplogroup (paternal). US customers who bought tests with a health-related component and received health-related results before November 22, 2013, have online access to an assessment of inherited traits and genetic disorder risks.[3][66][67] Health-related results for US customers who purchased the test on or after November 22, 2013, were suspended until late 2015 while undergoing FDA regulatory review.[4][48][49] Customers who bought tests from 23andMe's Canadian and UK locations have access to some, but not all, health-related results.[5][8]
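Raw-data exports of this kind are commonly documented as a tab-separated table of rsID, chromosome, position, and genotype, preceded by comment lines starting with "#". Assuming that layout (an assumption, not a confirmed specification of 23andMe's current format), a minimal Python sketch for browsing such a file might read:

    # Read a raw-data export and look up calls; the filename and the
    # four-column tab-separated layout are assumptions for illustration.
    import csv

    def load_raw_data(path):
        """Return {rsid: (chromosome, position, genotype)} from an export."""
        records = {}
        with open(path) as handle:
            rows = csv.reader(
                (line for line in handle if not line.startswith("#")),
                delimiter="\t",
            )
            for row in rows:
                if len(row) != 4:
                    continue  # skip blank or malformed lines
                rsid, chromosome, position, genotype = row
                records[rsid] = (chromosome, int(position), genotype)
        return records

    # Example: list every call on mitochondrial DNA ("MT" in this layout).
    data = load_raw_data("genome_export.txt")  # hypothetical filename
    for rsid, (chrom, pos, genotype) in data.items():
        if chrom == "MT":
            print(rsid, pos, genotype)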

As of February 2018, 23andMe has genotyped over 3,000,000 individuals.[68] FDA marketing restrictions reduced customer growth rates.[69]

23andMe is commonly used by donor-conceived people to find their biological siblings and, in some cases, their sperm or egg donor.[70]

Product changes

In late 2009, 23andMe split its genotyping service into three differently priced products: an Ancestry Edition, a Health Edition, and a Complete Edition.[71] This decision was reversed a year later, when the products were recombined.[34] In late 2010, the company introduced a monthly subscription fee for updates based on new medical research findings.[34][72] The subscription model proved unpopular with customers and was eliminated in mid-2012.[73]

23andMe only sold raw genetic data and ancestry-related results in the US due to FDA restrictions from November 22, 2013 until October 21, 2015,[3][48][49] when it announced that it would resume providing health information in the form of carrier status and wellness reports with FDA approval.[74] Wojcicki said they still plan to report on disease risk, subject to future FDA approval.[4]

The price of the full direct-to-consumer testing service in the US fell from $999 in 2007 to $399 in 2008[75] and to $99 in 2012,[17] by which point it was effectively being sold as a loss leader to build a valuable customer database.[33][76][77] In October 2015, the US price was raised to $199.[74] In September 2016, an ancestry-only version was once again offered at a lower price of $99, with an option to later upgrade to include the health component for an additional $125.[78]

The initial price of the product sold in Canada from October 2014, which includes health-related results, was CA$99.[5][6] The initial price of the product sold in the UK from December 2014, which includes health-related results, was £125.[79]

In February 2018, 23andMe announced that its ancestry reports would tell people what country they were from, not just what region, and added 120 new regions. Like other companies, it still lacked data on Asia and Africa, a gap its African Genetics Program (launched in October 2016 with a grant from the US National Institutes of Health) aims to close by recruiting sub-Saharan Africans to increase the genomic data on racial and ethnic minorities.[80][81] Building on the African Genetics Program, the Global Genetics Program was also announced in February 2018; it aims to increase genomic data from 61 underrepresented countries by providing free tests to individuals who have all four grandparents from one of those countries. In April 2018, 23andMe announced the Populations Collaboration Program, which sets up formal collaborations between the company and researchers investigating underrepresented countries.[82]

Additional services

Since October 1, 2020, the company has offered a service called "23andMe+", priced at $29 per year, for customers of the "Health + Ancestry" service who were genotyped on version 5 of the microarray chip used by the company. The service adds reports on health and pharmacogenetics and commits to ongoing new reports and features.[83]

Lemonaid Health acquisition

At the end of 2021, 23andMe acquired the digital healthcare company Lemonaid Health for $400 million to "...give patients and healthcare providers better information about health risks and treatment". Paul Johnson, CEO and co-founder of Lemonaid Health, became COO of the 23andMe consumer business.[84][85][86]

Instrument and chip versions

Until 2010, Illumina sold only instruments that were labeled "for research use only"; in early 2010, it obtained FDA approval for its BeadXpress system to be used in clinical tests.[87][88]

In June 2020, 23andMe published results from a study that claimed that people with type O blood may be at lower risk of catching COVID-19. Out of more than 750,000 participants, those with type O blood were 9–18% less likely to contract the virus, while those who had been exposed were 13–26% less likely to test positive. The study is ongoing and has not been peer-reviewed.[89][90][91]

Some customers comparing 23andMe ancestry results with those of other genomic and ancestry testing companies have received differing results, possibly due to human error or to differences in analysis stemming from the overrepresentation of one country or region over another in each company's database.[92] Ancestry results are based on how confident the company is that the DNA comes from a specific region: a specific country is reported when confidence is high, and a broad region when confidence is low. This can lead to surprising results, with specific countries masked by low confidence.[93] In August 2018, the company said it was broadening its coverage of Africa and East Asia.[94] The possibility of false positives also adds to customer confusion and unnecessary concern when interpreting results.[95]
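The way a confidence threshold can mask a specific country behind a broader region is illustrated by the Python sketch below; the country-to-region hierarchy, confidence values, and 0.9 threshold are invented for illustration and do not reflect 23andMe's actual model.

    # Toy confidence-threshold assignment: report a specific country only
    # when confidence is high, otherwise fall back to a broader region.
    BROADER_REGION = {
        "Poland": "Eastern Europe",   # hypothetical hierarchy
        "Italy": "Southern Europe",
    }

    def assign_label(country, confidence, threshold=0.9):
        """Return the country if confidence clears the threshold, else its region."""
        if confidence >= threshold:
            return country
        return BROADER_REGION.get(country, "Broadly European")

    print(assign_label("Poland", 0.95))  # -> "Poland"
    print(assign_label("Poland", 0.60))  # -> "Eastern Europe" (country masked)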

In 2019, research from the University of Southampton used the company as an example of direct-to-consumer tests that emphasize "breadth over detail", in one case checking only a few variants of a particular cancer-causing gene instead of the possible thousands, and said that such tests were generally unreliable.[96]

In 2019, identical twin sisters Charlsie and Carly Agro both took DNA tests from 23andMe, AncestryDNA, MyHeritage, FamilyTreeDNA, and LivingDNA, and found that the results for one sister did not match the results for the other.[97] Charlsie's test reported 38% Italian, 28% Eastern European, 15% Balkan, around 3% Broadly European, 13% other, and 2.6% French and German, while Carly's reported 37% Italian, 25% Eastern European, 14% Balkan, 13% Broadly European, and 12% other. Carly's test did not detect the French and German ancestry that Charlsie's did, but it did detect Polish ancestry, under the Eastern European category, which went undetected in Charlsie's results. Charlsie Agro had asked Mark Gerstein, a computational biologist at Yale University, to analyze both twins' raw DNA data beforehand; his team said the results should be identical, as the twins' DNA was "statistically the same".[98] The results, while not identical, were very similar, raising the question of how accurate DNA ancestry tests are. Although the machines that process human DNA are highly accurate, errors can occur, and human error can also produce unexpected results.
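The kind of raw-data check Gerstein's team performed can be illustrated with a toy concordance calculation in Python: the fraction of shared SNP calls on which two exports agree. The genotype values below are invented.

    # Toy concordance between two genotype-call dictionaries (rsID -> genotype).
    def concordance(calls_a, calls_b):
        """Fraction of rsIDs present in both sets whose genotypes match."""
        shared = set(calls_a) & set(calls_b)
        if not shared:
            return 0.0
        matches = sum(1 for rsid in shared if calls_a[rsid] == calls_b[rsid])
        return matches / len(shared)

    twin_a = {"rs1": "AA", "rs2": "AG", "rs3": "CC"}
    twin_b = {"rs1": "AA", "rs2": "AG", "rs3": "CT"}
    print(f"{concordance(twin_a, twin_b):.2%}")  # -> 66.67% on this toy data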

Questions have been raised since at least 2013 as to whether the company can obtain informed consent through its web-based interactions with people who want to submit samples for sequencing.[99][100]

The company collects not only genetic and personal information from customers who order DNA tests, but also data about their web behavior, captured through its website, products, software, cookies, and smartphone app.[101][102] A combination of individual policies within the terms of service and privacy policy (cookies, disclosure of aggregate data, targeted advertising) makes 23andMe a valuable data mine for third parties such as health insurance companies, pharmaceutical companies, advertising companies, biotechnology companies, law enforcement, and other interested parties.[103][104][105] People may not be aware of how the company uses their data, and there is always a risk of data breaches.[106][107]

United States

Depending on the state in which an individual resides, 23andMe must follow that state's laws on privacy and disclosure of information. Because 23andMe is not a medical provider, it does not have to abide by the privacy rules that apply in a doctor's office, such as the Health Insurance Portability and Accountability Act (HIPAA).[107] Research by Deloitte found that only 9% of consumers read terms and conditions, and research from ProPrivacy concluded that only 1% read privacy policies, which suggests that consent to be included in research may be given without full knowledge of the permissions granted.[108] In addition, 23andMe's privacy policy can be confusing for consumers.[109] Despite this confusion, 23andMe's informed-consent practices are IRB-approved. Several sections of the privacy policy allow data to be disclosed to third parties regardless of whether consent has been given:

Section 4(b) "We permit third party advertising networks and providers to collect Web-Behavior Information regarding the use of our Services to help us to deliver targeted online advertisements ('ads') to you."[101]

Section 4(c): "Regardless of your consent status, we may also include your data in aggregate data that we disclose to third-party research partners who will not publish that information in a scientific journal."[110]

Section 4(d): "We may share some or all of your Personal Information with other companies under common ownership or control of 23andMe, which may include our subsidiaries, our corporate parent, or any other subsidiaries owned by our corporate parent in order to provide you better service and improve user experience."[101]

The Genetic Information Nondiscrimination Act (GINA) protects a person against discrimination based on genetic information by employers or health insurance companies in most situations. However, GINA's protections do not extend to long-term-care or disability-insurance providers.

European Union

Effective 25 May 2018, 23andMe must abide by the General Data Protection Regulation (GDPR).[111][112] The GDPR is a set of rules that helps individuals take control of personal data collected, used, and stored digitally or in a structured paper filing system, and that restricts companies' use of personal data.[112] The regulation also applies to companies outside the EU that offer products or services to individuals in the EU.[112]

Medical research

Aggregated customer data is studied by scientific researchers employed by 23andMe for research on inherited disorders; rights to use customers' data are also sold to pharmaceutical and biotechnology companies for use in their research.[4][33][113] The company also collaborates with academic and government scientists.[114][69] In July 2012, 23andMe acquired the startup CureTogether, a crowdsourced treatment-ratings website with data on over 600 medical conditions.[115] 23andMe offers an optional consent that allows an individual's genetic information to be included in medical research that may be published in a scientific journal. However, if an individual chooses not to consent to use of their "personal information", their "genetic information" and "self-reported information" may still be used and shared with the company's third-party service providers.[101][105]

In 2010, 23andMe said that it was able to use its database to validate work published by the NIH: identifying mutations in the gene that codes for glucocerebrosidase as a risk factor for Parkinson's disease.[114]

In 2015, 23andMe made a business decision to pursue drug discovery itself, under the direction of former Genentech executive Richard Scheller.[4][116] One of its main focuses is Parkinson's disease: the company is searching its database for rare variants associated with Parkinson's in the hope of developing a drug for the disease. The company also set up research agreements with the pharmaceutical company Pfizer to explore the genetic causes of inflammatory bowel disease, namely ulcerative colitis and Crohn's disease.[117][118]

In 2016, the company ended a project to provide customers with next-generation sequencing out of concern, according to Wojcicki, that the results would be too complicated or vague to fit the company's goal of providing useful information, both quickly and precisely, directly to consumers.[119] Also in 2016, 23andMe used self-reported data from customers to locate 17 genetic loci that appear to be associated with depression.[120]

In 2017, 23andMe, the Lundbeck pharmaceutical company, and the Milken Institute think tank started collaborations to focus on psychiatric disorders, such as bipolar disorder and major depression. Their goals are to determine the genetic roots of such disorders, as well as pursue drug discovery in those areas.[121]

Use by law enforcement

23andMe does not have a history of allowing its genetic profiles to be used by law enforcement to solve crimes, on the grounds that doing so would violate users' privacy.[122][123] As of February 15, 2019, 23andMe had denied data requests from law enforcement on six separate occasions.[123] However, according to section 8 of the terms of service, "23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary."[124] The information 23andMe collects from users is highly personal. Its privacy policies are presented clearly on its website, with a table of contents, simple structure and language, and precise explanations of how data is collected, used, and stored, as well as how users can access, change, or delete their data and contact the company with concerns. Even so, some sections raise questions. Ancestry.com, for instance, uses genetic information not only to provide users with their DNA-kit results but also to conduct "scientific, statistical, and historical research" and "to better understand population and ethnicity-related health, wellness, aging, or physical conditions". Such companies ask users for permission before their data is used for research, but many users do not read the privacy policy and do not realize what they are agreeing to. Over 5 million 23andMe customers have opted in to having their data used in research.[125][126] In at least one case, 23andMe data was used to identify the remains of a crime victim.

See also

  • Ancestry.com
  • Family Tree DNA
  • Genographic Project
  • Living DNA
  • MyHeritage

References

  1. ^ a b c Goetz, Thomas (November 17, 2007). "23AndMe Will Decode Your DNA for $1,000. Welcome to the Age of Genomics". Wired. Archived from the original on March 12, 2014. Retrieved April 5, 2012.
  2. ^ "Fact Sheet". 23andMe. Archived from the original on January 19, 2013. Retrieved November 27, 2013.
  3. ^ a b c d e Herper, Matthew (December 5, 2013). "23andMe Stops Offering Genetic Tests Related to Health". Forbes. Archived from the original on February 9, 2014. Retrieved December 6, 2013.
  4. ^ a b c d e f g Pollack, Andrew (October 21, 2015). "23andMe Will Resume Giving Users Health Data". The New York Times. ISSN 0362-4331. Archived from the original on March 4, 2016. Retrieved October 21, 2015.
  5. ^ a b c d Ubelacker, Sheryl (October 1, 2014). "U.S. company launches genetic health and ancestry info service in Canada". Winnipeg Free Press. The Canadian Press. Retrieved October 7, 2014.
  6. ^ a b c d Hansen, Darah (October 2, 2014). "5Q: Anne Wojcicki, CEO 23andMe on knowing your DNA data (and being married to the boss of Google)". Yahoo Finance Canada. Retrieved October 7, 2014.
  7. ^ a b "23andme genetic testing service raises ethical questions". CBC News. October 2, 2014. Retrieved October 7, 2014.
  8. ^ a b c Roberts, Michelle; Rincon, Paul (December 2, 2014). "Controversial DNA test comes to UK". BBC News. Retrieved December 2, 2014.
  9. ^ "Autosomal SNP comparison chart - ISOGG Wiki". isogg.org. Retrieved September 4, 2022.
  10. ^ a b c d Hamilton, Anita (October 29, 2008). "Best Inventions of 2008". Time. Archived from the original on November 2, 2008. Retrieved April 5, 2012.
  11. ^ "About Us". 23andMe.
  12. ^ Janzen, Tim; et al. "Family Tree DNA Learning Center". Autosomal DNA testing comparison chart. International Society of Genetic Genealogy Wiki. Gene by Gene.
  13. ^ "All those 23andMe spit tests were part of a bigger plan firm".
  14. ^ "Google invests in genetics firm". BBC News. May 22, 2007. Retrieved June 28, 2007.
  15. ^ "Board Of Directors". Nodal Exchange. Archived from the original on December 3, 2013. Retrieved November 27, 2013.
  16. ^ "Curious: We've got questions". Archived from the original on February 25, 2014. Retrieved April 7, 2014.
  17. ^ a b Tsotsis, Alexia (December 11, 2012). "Another $50M Richer, 23andMe Drops Its Price To $99 Permanently. But Will The Average Dude Buy In?". TechCrunch. AOL. Retrieved December 12, 2012.
  18. ^ Chen, Caroline (October 14, 2015). "23andMe Funding Values Genetics Startup at $1.1 Billion". Bloomberg Business. Retrieved October 25, 2015.
  19. ^ "Notice of Exempt Offering of Securities". U.S. Securities and Exchange Commission. Retrieved July 11, 2015.
  20. ^ Luttner, Kathryn. "23andMe partners with 'Despicable Me 3' for first movie partnership" Campaign, June 2, 2017. Retrieved April 22, 2018
  21. ^ Jordon, Steve. "Warren Buffett narrates 23andMe commercial". Omaha.com.
  22. ^ Lynley, Matthew; Roof, Katie (September 6, 2017). "23andMe hits $1.5B pre-money valuation in latest huge funding round". Techcrunch.
  23. ^ Prashad, Spencer; Srikanthan, Shan (April 9, 2018). "23andMe: Building a Genetically-Sound Company". Ivey Business Review.
  24. ^ Fox, Maggie (July 25, 2018). "Drug giant Glaxo teams up with DNA testing company 23andMe". NBC News. Retrieved July 26, 2018.
  25. ^ "23andMe is getting $50M from GSK to extend their research partnership". www.bizjournals.com. Retrieved May 8, 2022.
  26. ^ Farr, Christina (January 23, 2020). "23andMe lays off 100 people as DNA test sales decline, CEO says she was 'surprised' to see market turn". CNBC. Retrieved January 26, 2020.
  27. ^ "23andMe and GSK Begin First Clinical Trial with Cancer Therapy" (Press release). 23andMe. July 29, 2020. Retrieved August 3, 2020.
  28. ^ Etherington, Darrell (December 29, 2020). "23andMe raises $82.5 million in new funding". Techcrunch.
  29. ^ "23andMe Goes Public as $3.5 Billion Company With Branson Aid". Bloomberg. February 4, 2021. Retrieved February 4, 2021.
  30. ^ "23andMe Successfully Closes its Business Combination with VG Acquisition Corp". 23andMe. June 17, 2021.
  31. ^ John Commins (October 26, 2021). "23andME Will Spend $400M To Acquire Lemonaid Health". Health Leaders Media.
  32. ^ "23andMe Reports FY2022 Fourth Quarter and Full Year Financial Results" (Press release). May 26, 2022 – via AP News.
  33. ^ a b c d Jeffries, Adrianne (December 12, 2012). "Genes, patents, and big business: at 23andMe, are you the customer or the product?". The Verge. Archived from the original on January 2, 2014. Retrieved July 17, 2014.
  34. ^ a b c d Vorhaus, Dan (November 23, 2010). "A Thanksgiving Tradition: 23andMe Repackages Product, Raises Prices". Genomics Law Report. Robinson Bradshaw & Hinson. Archived from the original on December 3, 2013. Retrieved November 29, 2013.
  35. ^ Greely, Hank (November 25, 2013). "The FDA drops an anvil on 23andMe – now what?". Stanford University. Archived from the original on November 29, 2013. Retrieved November 29, 2013. FDA had promised a risk-based regulatory scheme, but we don’t know what it is.
  36. ^ a b c Wojcicki, Anne (November 26, 2013). "An Update Regarding The FDA's Letter to 23andMe". 23andMe. Retrieved November 27, 2013.
  37. ^ Langreth, Robert; Herper, Matthew (April 18, 2008). "States Crack Down On Online Gene Tests". Forbes.
  38. ^ Kincaid, Jason (June 18, 2008). "Cease And Desist: California Tries to Unravel 23andMe's Genetic Testing". TechCrunch.com – via The Washington Post.
  39. ^ Pollack, Andrew (August 20, 2008). "California Licenses 2 Companies to Offer Gene Services". The New York Times.
  40. ^ "FDA cracking down on genetic tests". NBC. June 11, 2010. Retrieved November 27, 2013.
  41. ^ Pollack, Andrew (June 11, 2010). "F.D.A. Faults 5 Companies on Genetic Tests". The New York Times.
  42. ^ a b "Inspections, Compliance, Enforcement, and Criminal Investigations – 23andMe, Inc". FDA. November 22, 2013. Retrieved November 25, 2013.
  43. ^ Malone, Bill (February 1, 2014). "A New Chapter in FDA Regulation". AACC.org – Clinical Laboratory News.
  44. ^ Perrone, Matthew (November 25, 2013). "FDA Tells 23andMe to Halt Sales of Genetic Test". ABC News. Retrieved November 25, 2013.
  45. ^ Gray, Tyler (November 25, 2013). "FDA To 23andMe Founder Anne Wojcicki: Stop Marketing $99 DNA Test Or Face Penalties". Fast Company. Retrieved November 25, 2013.
  46. ^ Garde, Damian (December 3, 2013). "23andMe pulls ads after FDA warning, but sales roll on". FierceMedicalDevices. FierceMarkets. Retrieved December 4, 2013.
  47. ^ del Castillo, Michael (December 3, 2013). "Calm down about 23andMe, the media is getting it wrong". Upstart Business Journal. Retrieved December 5, 2013.
  48. ^ a b c d "23andMe, Inc. provides update on FDA regulatory review" (Press release). 23andMe. December 5, 2013. Retrieved December 6, 2013.
  49. ^ a b c d Fung, Brian (December 6, 2013). "Bowing again to the FDA, 23andMe stops issuing health-related genetic reports". The Washington Post. Retrieved December 6, 2013.
  50. ^ Khan, Razib (November 25, 2013). "The FDA's Battle With 23andMe Won't Mean Anything in the Long Run". Slate Magazine. Retrieved November 25, 2013.
  51. ^ Etherington, Darrell (November 25, 2013). "DNA Testing Startup 23andMe Hits A Snag As FDA Shuts Down Sales Of Home Testing Kit". TechCrunch. Retrieved November 25, 2013.
  52. ^ Young, Susan (November 25, 2013). "Updated: FDA Orders 23andMe to Stop Genetic Tests". Technology Review. Retrieved November 25, 2013.
  53. ^ Farr, Christina (May 6, 2014). "Gene startup 23andme casts eyes abroad after U.S. regulatory hurdle". Reuters. Archived from the original on May 27, 2014. Retrieved July 17, 2014.
  54. ^ a b "FDA permits marketing of first direct-to-consumer genetic carrier test for Bloom syndrome". FDA News Release. February 19, 2015.
  55. ^ "Device Classification under Section 513(f)(2)(de novo)". accessdata.fda.gov. FDA. Retrieved April 7, 2017.. 23andMe's Autosomal Recessive Carrier Screening Gene Mutation Detection System in FDA database
  56. ^ "23andMe Gets FDA Clearance to Market Bloom Syndrome Carrier Test Directly to Consumers". GenomeWeb. February 19, 2015.
  57. ^ "Letter re DEN140044" (PDF). FDA. October 1, 2015.. Decision Summary: Evaluation of DEN140044 revising February 2015 evaluation.
  58. ^ Bensinger, Greg (October 26, 2016). "Disconnect Between Silicon Valley and Regulators Over Health Technologies, 23andMe CEO Says". Wall Street Journal. ISSN 0099-9660. Retrieved November 23, 2016.
  59. ^ a b "FDA allows marketing of first direct-to-consumer tests that provide genetic risk information for certain conditions". FDA. April 6, 2017.
  60. ^ Kolata, Gina (April 6, 2017). "F.D.A. Will Allow 23andMe to Sell Genetic Tests for Disease Risk to Consumers". The New York Times.
  61. ^ Stallings, Erika (March 30, 2018). "Opinion | Allowing 23andMe To Test For BRCA May Do More Harm Than Good". HuffPost. Retrieved February 11, 2021.
  62. ^ "FDA authorizes, with special controls, direct-to-consumer test that reports three mutations in the BRCA breast cancer genes". FDA. March 6, 2018.
  63. ^ "Our Service: Genotyping Technology". 23andMe. Archived from the original on December 2, 2013. Retrieved November 27, 2013.
  64. ^ Hadly, Scott (November 18, 2013). "23andMe's New Custom Chip". 23andMe. Retrieved November 27, 2013.
  65. ^ Madara, Jason. "The extraction process: meet 23andme's Anne Wojcicki", WIRED UK, March 6, 2017. Retrieved April 17, 2018.
  66. ^ Baertlein, Lisa (November 20, 2007). "Google-backed 23andMe offers $999 DNA test". USA Today. Archived from the original on May 26, 2014. Retrieved April 5, 2012.
  67. ^ Swarns, Rachel L. (January 23, 2012). "With DNA Testing, Suddenly They Are Family". The New York Times. Archived from the original on July 17, 2014. Retrieved July 17, 2014.
  68. ^ Regalado, Antonio (February 12, 2018). "2017 was the year consumer DNA testing blew up". MIT Technology Review. Retrieved February 20, 2018.
  69. ^ a b Kiss, Jemima (March 9, 2014). "23andMe admits FDA order 'significantly slowed up' new customers". The Guardian. Archived from the original on March 16, 2014. Retrieved March 10, 2014.
  70. ^ Chung, Emily; Glanz, Melanie; Adhopia, Vik (January 25, 2018). "No more Mr. Anonymous for sperm donors". CBC News.
  71. ^ Wu, Shirley (November 13, 2009). "Get Just the Information You Want: 23andMe To Offer Separate Health and Ancestry Editions". 23andMe. Archived from the original on December 2, 2013. Retrieved November 29, 2013.
  72. ^ MacArthur, Daniel (November 24, 2010). "News from 23andMe: a bigger chip, a new subscription model and another discount drive". Wired. Archived from the original on June 29, 2013. Retrieved November 27, 2013.
  73. ^ "23andMe Eliminates Subscription Model". GenomeWeb Daily News. May 10, 2012. Retrieved November 27, 2013.
  74. ^ a b "23andMe reboots genetic health testing, now with FDA approval". Ars Technica. October 21, 2015. Retrieved October 21, 2015.
  75. ^ Pollack, Andrew (September 9, 2008). "DNA Profile Provider Is Cutting Its Prices". The New York Times. Retrieved December 27, 2017.
  76. ^ Hamilton, David (September 10, 2008). "23andMe's Price Cut: The End of Personal Genomics?". CBSNews.com. Archived from the original on July 17, 2014. Retrieved July 17, 2014.
  77. ^ Krol, Aaron (March 24, 2014). "23andMe Pursues Health Research in the Shadow of the FDA". Bio-IT World. Archived from the original on August 6, 2014. Retrieved July 17, 2014.
  78. ^ Ramsey, Lydia (September 22, 2016). "23andMe is now offering a $99 genetics test again – but it's very different from the original". Business Insider. Retrieved September 26, 2016.
  79. ^ Gibbs, Samuel (December 2, 2014). "DNA-screening test 23andMe launches in UK after US ban". The Guardian. Retrieved October 26, 2015.
  80. ^ Farr, Christina (February 28, 2018). "23andMe is getting more specific with its DNA ancestry tests, adding 120 new regions". CNBC. Retrieved March 29, 2018.
  81. ^ Hayden, Erika Check. "The rise and fall and rise again of 23andMe", Nature, October 11, 2017. Retrieved April 21, 2018.
  82. ^ Zhang, Sarah. "23andMe Wants Its DNA Data to Be Less White", The Atlantic, April 23, 2018.
  83. ^ "23andMe+: An ongoing approach to your genetics". October 1, 2020.
  84. ^ Reuters (October 21, 2021). "23andMe to buy telehealth firm Lemonaid for $400 mln". Reuters. Retrieved March 21, 2022.
  85. ^ "23andMe snaps up prescription platform Lemonaid Health for $400M". MobiHealthNews. October 22, 2021. Retrieved March 21, 2022.
  86. ^ HealthLeaders. "23andMe Will Spend $400M to Acquire Lemonaid Health". www.healthleadersmedia.com. Retrieved March 21, 2022.
  87. ^ Petrone, Justin (May 4, 2010). "FDA Clears Illumina's BeadXpress System for Clinical Use". GenomeWeb.
  88. ^ "510(k) Premarket Notification K093128". FDA. Retrieved April 7, 2017.
  89. ^ Kuchler, Hannah (June 8, 2020). "Study links blood type to lower risk of catching coronavirus". Financial Times. Retrieved June 9, 2020.
  90. ^ Brown, Kristen V (June 8, 2020). "23andMe Provides More Evidence That Blood Type Plays Role in Virus". Bloomberg. Retrieved June 9, 2020.
  91. ^ "23andMe provides more evidence that blood type plays role in COVID-19 virus". gulfnews.com. Retrieved June 9, 2020.
  92. ^ O'Rourke, Tanya. "How accurate are in-home DNA tests like Ancestry, 23andMe?", WCPO, December 12, 2017. Retrieved April 28, 2018.
  93. ^ Baron, Ethan. "DNA spit kits: 23andMe’s ancestry results ‘most confounding,’ new report says", Chicago Tribune, January 17, 2017. Retrieved April 28, 2018.
  94. ^ Dickey, Megan Rose (August 21, 2018). "23andMe's ancestry tools are getting better for people of color". TechCrunch. Retrieved August 24, 2018.
  95. ^ Mukherjee, Sy. "At-Home DNA Test Kits Are Blowing Up In Popularity. But Are They Accurate?", Fortune, April 2, 2018.
  96. ^ "Genetic tests: Experts urge caution over home testing". BBC News. October 17, 2019. Retrieved October 17, 2019.
  97. ^ Agro, Charlsie; Denne, Luke (January 18, 2019). "Twins get some 'mystifying' results when they put 5 DNA ancestry kits to the test". CBC News.
  98. ^ "Identical twins, identical DNA results—right? Not necessarily". www.advisory.com. Retrieved May 31, 2022.
  99. ^ Stoeklé, HC; Mamzer-Bruneel, MF; Vogt, G; Hervé, C (March 31, 2016). "23andMe: a new two-sided data-banking market model". BMC Medical Ethics. 17: 19. doi:10.1186/s12910-016-0101-9. PMC 4826522. PMID 27059184.
  100. ^ Allyse, M (February 2013). "23 and Me, We, and You: direct-to-consumer genetics, intellectual property, and informed consent". Trends in Biotechnology. 31 (2): 68–69. doi:10.1016/j.tibtech.2012.11.007. PMC 6309979. PMID 23237855.
  101. ^ a b c d "Privacy Policy". 23andMe. July 17, 2018.
  102. ^ "Behind at-home DNA testing companies sharing genetic data with third parties". CBS News. August 2, 2018.
  103. ^ Brown, Kristen (April 17, 2017). "23andMe Is Selling Your Data, But Not How You Think". Gizmodo. Retrieved May 1, 2019.
  104. ^ Drabiak, Katherine (February 26, 2016). "Read the Fine Print Before Sending Your Spit to 23andMe". The Hastings Center.
  105. ^ a b Brodwin, Erin (August 3, 2018). "DNA-testing companies like 23andMe sell your genetic data to drugmakers and other Silicon Valley startups". Business Insider.
  106. ^ Schulson, Michael (December 29, 2017). "Spit and Take". Slate.
  107. ^ a b Pitts, Peter (February 15, 2017). "The Privacy Delusions Of Genetic Testing". Forbes.
  108. ^ Wisgard, Alex (January 25, 2019). "23andMe sell your data – should you be worried?". dnatestingchoice.com.
  109. ^ Fox, Maggie (November 29, 2017). "What you're giving away with those home DNA tests". NBC News.
  110. ^ "Will the information I provide be shared with third parties?". 23andMe. Retrieved May 1, 2019.
  111. ^ "Data Protection - GDPR". 23andMe.
  112. ^ a b c "It's your Data - Take Control: Data Protection in the EU" (PDF). European Commission. 2018. Retrieved May 1, 2019.
  113. ^ McBride, Ryan (November 29, 2012). "23andMe sets stage for stronger ties with pharma". FierceBiotech. Archived from the original on August 8, 2013. Retrieved July 17, 2014.
  114. ^ a b Goetz, Thomas (June 22, 2010). "Sergey Brin's Search for a Parkinson's Cure". Wired. Vol. 18, no. 7. Archived from the original on July 17, 2014. Retrieved April 5, 2012.
  115. ^ "23andMe Makes First Acquisition, Nabs CureTogether To Double Down On Crowdsourced Genetic Research = Jul 11, 2012". TechCrunch. Retrieved February 18, 2015.
  116. ^ Herper, Matthew (March 12, 2015). "In Big Shift, 23andMe Will Invent Drugs Using Customer Data". Forbes. Retrieved October 28, 2015.
  117. ^ Herper, Matthew (January 6, 2015). "Surprise! With $60 Million Genentech Deal, 23andMe Has A Business Plan". Forbes.
  118. ^ Molten, Megan (September 13, 2017). "23andMe Is Digging Through Your Data for a Parkinson's Cure". Wired.
  119. ^ Pressman, Aaron. "Why 23andme Killed Its Next Generation Gene Sequencing Project", Fortune, October 27, 2016. Retrieved April 17, 2018.
  120. ^ Mullins, N; Lewis, CM (August 2017). "Genetics of Depression: Progress at Last". Current Psychiatry Reports. 19 (8): 43. doi:10.1007/s11920-017-0803-9. PMC 5486596. PMID 28608123.
  121. ^ Mukherjee, Sy (September 12, 2017). "23andMe Raises Another $250 Million – And Wants to Use Your Genetic Data to Make Drugs". Fortune.
  122. ^ Brown, Kristen V. (February 1, 2019). "A Major DNA-Testing Company Is Sharing Some of Its Data With the FBI. Here's Where It Draws the Line". Fortune.
  123. ^ a b "Transparency Report". 23andMe. February 15, 2019.
  124. ^ "Terms of Service". 23andMe. Retrieved May 1, 2019.
  125. ^ "How 23andMe is Monetizing Your DNA". January 5, 2015.
  126. ^ "Your DNA is a valuable asset, so why give it to ancestry websites for free? | Laura Spinney". TheGuardian.com. February 16, 2020.

External links

  • 23andMe's New Formula: Patient Consent. Antonio Regalado, MIT Technology Review
  • 23andMe, Ancestry DNA, Family Tree DNA raw data analysis tools in 2019. XCode, Medium Article
  • Official website  
  • Business data for 23andMe, Inc.:

    • Bloomberg
    • Google
    • Reuters
    • SEC filings
    • Yahoo!
