Description

Data analytic techniques have greatly enhanced the human ability to solve data-related problems. They open a window through which an analyst can devise a solution to a given problem by examining the related data alone; such techniques may not be directly visible to the programmer. Machine learning, in short, is programming computers to generate such algorithms from example data or past experience. Numerous models or functions can be developed to fit a distribution of data points, and the magnitude and breadth of the data play a major role in determining which model fits best. Today data is growing at an alarming rate, both in size and in the information it conveys, and analytics on large-scale data demands long execution times on limited resources. Generating different models on a single machine therefore becomes tedious and time consuming; implementing the same on a distributed network is a possible solution that is both interesting and challenging. The algorithms used to build these models should be chosen carefully on the basis of reliability and data integrity, and should at the same time be easy to distribute. The range of algorithms should be wide enough to apply the same analytics to all of the data with the required performance. Analyzing metrics such as scalability, accuracy, and execution time helps classify each algorithm's impact on that performance; these metrics depend strongly on the data, the type of algorithm, and the platform under consideration. This work proposes a system for scalable data analytics and analyzes its scalability. It also shows how the system implements a unique form of ensemble learning in useful time without compromising accuracy. Finally, it demonstrates scalability at two levels: (1) scalability of an individual algorithm, and (2) scalability of ensemble learning across different use cases.
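The abstract does not specify how the ensemble is built, but the general idea of distributing ensemble learning can be illustrated with a toy sketch: each worker trains a base model on its own data partition, and predictions are combined by majority vote. All names and the trivial threshold learner below are illustrative assumptions, not the system described in this work.

```python
# Hypothetical sketch of partition-wise ensemble learning.
# Each partition trains its own base model independently (parallelizable);
# the ensemble prediction is a majority vote across partition models.
from collections import Counter
from statistics import mean

def train_threshold_model(partition):
    """Toy base learner: predict class 1 when x exceeds the partition mean."""
    threshold = mean(x for x, _ in partition)
    return lambda x: 1 if x > threshold else 0

def ensemble_predict(models, x):
    """Combine partition-local models by majority vote."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# A dataset of (feature, label) pairs split into partitions,
# as it would be on a distributed cluster.
partitions = [
    [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)],
    [(0.5, 0), (3.0, 0), (7.5, 1), (10.0, 1)],
    [(1.5, 0), (2.5, 0), (8.5, 1), (9.5, 1)],
]

# Training each model touches only its own partition, so this loop
# could run in parallel across machines with no shared state.
models = [train_threshold_model(p) for p in partitions]

print(ensemble_predict(models, 9.0))  # -> 1
print(ensemble_predict(models, 1.0))  # -> 0
```

Because each base model sees only local data, training scales out with the number of partitions, and only the small models (not the data) need to be gathered for the voting step.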