Big data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be handled by traditional data-processing application software.
In 2005, Roger Mougalas of O’Reilly Media coined the term Big Data for the first time, only a year after the company coined the term Web 2.0. It refers to a set of data so large that it is almost impossible to manage and process using traditional business intelligence tools.
Big data is commonly classified in three ways:
- Structured Data (such as transactions and financial records)
- Unstructured Data (such as text, documents and multimedia files)
- Semi-Structured Data (such as web server logs and streaming data from sensors)
These three terms, while applicable at all levels of analytics, are paramount in big data. Understanding where the raw data comes from and how it must be treated before analysis only becomes more important at big data's scale. Because there is so much of it, information extraction must be efficient to make the endeavor worthwhile.
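The distinction between the three categories above comes down to how much schema the data carries. A minimal Python sketch, using hypothetical sample records, illustrates how each kind is typically handled before analysis:

```python
import csv
import io
import json

# Hypothetical samples of the three categories described above.
structured = "2024-01-05,TXN-1001,49.99"                 # a transaction record (CSV)
semi_structured = '{"ip": "10.0.0.1", "status": 200}'    # a web server log entry (JSON)
unstructured = "Customer called to ask about a refund."  # free text

# Structured data: a fixed schema lets us map fields by position.
date, txn_id, amount = next(csv.reader(io.StringIO(structured)))

# Semi-structured data: self-describing keys, but no fixed schema.
log = json.loads(semi_structured)

# Unstructured data: no schema at all; even simple extraction
# requires text processing (here, a naive keyword check).
mentions_refund = "refund" in unstructured.lower()

print(amount, log["status"], mentions_refund)  # → 49.99 200 True
```

The structured record can be consumed directly; the semi-structured log needs parsing but describes itself; the unstructured text needs dedicated processing before it yields any value, which is where most big-data effort goes.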
For more, please follow us on LinkedIn, visit www.global-teq.com, or send your queries to info@global-teq.com.