Technologies have developed rapidly since the Industrial Revolution, and their directions of development have diversified: machines are no longer used only for transportation and production, but also for communicating with and getting help from them by non-physical means. The Gartner Hype Cycle for Emerging Technologies 2015 highlights several interesting technologies and shows when each is expected to mature. For instance, consumer 3D printing is expected to become available at a much cheaper price than today within five to ten years; speech-to-speech translation should be in everyday use within two to five years, so that we no longer have to worry about languages we do not know; and quantum computing, which can perform certain tasks far faster than the computers we use now, is expected to reach the plateau of productivity only after more than ten years. These technologies pass through five phases: the innovation trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment, and the plateau of productivity.
The term "big data" first appeared in a 1989 magazine article by fiction author Erik Larson, who used it in the context of companies collecting data about their customers. Big data then became a topic of scholarly interest and was treated in an academic paper in 1999. According to a statement made by Eric Schmidt in 2010, as much data is now being created every two days as was accumulated from the beginning of civilization up to 2003, which shows how enormous the recent growth in data accumulation has been. The dark side of big data has also been pointed out: a McKinsey report warned that big data will bring serious problems in privacy, security, and intellectual property, and these problems must be solved before the use of big data can be complete. Neil Postman warned that when globalization makes people interested only in the same non-serious matters, their own cultures will be destroyed by this common interest. His second warning was that if big data is abused, the world could come to resemble "1984" by George Orwell or "Brave New World" by Aldous Huxley.
There are three V's that describe the properties of big data. The first is volume, the sheer size of the data; the Internet of Things is one major source of this volume. Facebook is a good example: about 2.5 billion items are shared and 2.7 billion likes are made on Facebook every day, and its other activities generate similarly huge amounts of data, which shows how large the data of a single social network can be. Data can also be categorized as big, medium, or small by size: if the data cannot fit in 1 TB of memory, it is big; if it is between 10 GB and 1 TB, it is medium; and if it is less than 10 GB, so that it fits in a laptop's memory, it is small. Big data therefore requires many cores to process, while small data can be handled by a single core. The second V is velocity, which describes how fast data is generated, shared, and used. This speed can be observed in trading algorithms that monitor market changes within microseconds, and in search engines that return information almost instantly. Velocity also involves parallel processing, which lets two people who are far apart work on the same data at the same time. The third V is variety, which deals with analyzing different types of data such as geospatial data, 3D data, audio, video, and unstructured text. Important features associated with variety are data quality, accuracy, timeliness, relevancy, and security.
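The big/medium/small categorization above can be sketched as a simple rule. This is a minimal illustration in Python, not a standard library function: the name `classify_data_size` and the use of gigabytes as the unit are my own assumptions, while the 10 GB and 1 TB thresholds come from the text.

```python
def classify_data_size(size_gb):
    """Classify a dataset by the size thresholds described above.

    Following the text: data that cannot fit in 1 TB of memory is
    "big"; between 10 GB and 1 TB is "medium"; and under 10 GB,
    which fits in a laptop's memory, is "small".
    """
    if size_gb < 10:
        return "small"
    elif size_gb <= 1000:  # treating 1 TB as 1000 GB for simplicity
        return "medium"
    else:
        return "big"

print(classify_data_size(5))     # 5 GB fits on a laptop -> small
print(classify_data_size(250))   # 250 GB -> medium
print(classify_data_size(5000))  # 5 TB exceeds 1 TB of memory -> big
```

A classifier like this also hints at the processing note in the text: only the "big" category would call for distributing the work across many cores.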