Schedules

Event Schedule

We are in the process of finalizing the sessions. Expect more than 60 talks at the summit. Please check back on this page for updates.
Note: The Bangalore edition will have 3 tracks and the Hyderabad edition will have only 1 track.

  • Day 1 Bangalore

    January 22, 2020

  • A package allows easy, transparent and cross-platform extension of the R base system. R packages are a comfortable way to maintain collections of R functions and data sets, and to distribute statistical methodology to others. The package system allows many more people to contribute to R while still enforcing some standards. Packages are also a convenient way to maintain private functions and share them with colleagues: for example, I keep a private package of utility functions, and my working group has several “experimental” packages where we try out new things. This provides a transparent way of sharing code with co-workers, and the final transition from “playground” to production code is much easier. Another benefit of working with packages is that they can be dynamically loaded and unloaded at runtime, and hence only occupy memory when actually used. Installations and updates are fully automated and can be executed from inside or outside R. You will learn how to create a package of your own and make it public for others to use.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION

  • The most common question marketers ask now is: “Did my ad campaign cause the user to convert and generate more revenue for my brand, or would that have happened anyway?” Also, targeting randomly selected customers leads to high costs and weak response. The complexity of the ad-tech ecosystem is constantly growing, with brands running marketing activities across multiple channels, new targeting capabilities, and formats. Because of this, traditional digital measurement metrics such as cost per click, return on investment and cost per conversion just scratch the surface when measuring the impact of marketing strategies. This measurement gap leads us to look at incremental lift as a metric for measuring the impact of a marketing strategy. Incrementality testing is a mathematical approach to differentiate between correlation and causation. We formulated different approaches to calculate incremental lift that can be implemented in the digital marketing ecosystem. Viewability is one of the methodologies we are using for calculating incrementality, in which we measure the effectiveness of an ad by comparing users who are exposed to the ad against users who are not exposed to it (a minimal sketch of this comparison follows this entry). Our methodologies cover test environment setup, randomization, bias handling, hypothesis testing, primary output and the different ways of using this output. We used this output for strategy planning and optimization, helping us achieve higher campaign efficiency. Having a set of different approaches to calculate incrementality gives us the flexibility to cater to a wide range of test cases with different setup challenges and restrictions.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION
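
    A minimal sketch of the exposed-vs-unexposed comparison described above, using a two-proportion z-test to check whether the lift is significant. The conversion counts and group sizes are illustrative placeholders, not figures from the talk:

        # Incremental lift: compare conversion rates of users exposed to the ad
        # (test group) against users who were not exposed (control group).
        from statsmodels.stats.proportion import proportions_ztest

        conversions = [1200, 950]      # [test, control] converters (placeholders)
        group_sizes = [50000, 48000]   # [test, control] users (placeholders)

        cr_test = conversions[0] / group_sizes[0]
        cr_control = conversions[1] / group_sizes[1]
        lift = (cr_test - cr_control) / cr_control   # relative incremental lift

        # One-sided test: did exposure raise the conversion rate?
        z_stat, p_value = proportions_ztest(conversions, group_sizes,
                                            alternative="larger")
        print(f"lift = {lift:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")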

  • Day 1 Hyderabad

    January 30, 2020

  • We haven't found any sessions that match your criteria
  • Day 2 Bangalore

    January 23, 2020

  • TensorFlow is by far the most popular deep learning library, open-sourced by Google. In a short period of time it has grown tremendously in popularity compared to other libraries like PyTorch, Caffe and Theano. TensorFlow 2.0 has been released recently. This will be a technical session with code demos and a brief hands-on (using Google Colab) on building deep learning applications with TensorFlow. We will compare and contrast TensorFlow 1.x and TensorFlow 2.0 (a minimal sketch of the 2.x style follows this entry). There will be something for everyone, from those who are new to TensorFlow and deep learning to the experts. There will be a lot to learn about deep neural networks, convolutional neural networks and image recognition.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION
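
    A minimal sketch, for orientation, of the TensorFlow 2.x style the session contrasts with 1.x: eager execution and the built-in Keras API replace explicit graphs and sessions. The toy tensors and model below are illustrative only, not the session's demo code:

        import tensorflow as tf  # TensorFlow 2.x

        # In TF 1.x this would require building a graph and running it in a
        # tf.Session; in TF 2.x eager execution evaluates ops immediately.
        x = tf.constant([[1.0, 2.0]])
        print(tf.matmul(x, tf.transpose(x)))   # no Session needed

        # A small Keras classifier, trained with model.fit instead of
        # hand-written training loops.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
            tf.keras.layers.Dense(3, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_x, train_y, epochs=5)   # data loading omitted here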

  • Information technology has replaced the outdated systems in which we used to think and live. Nowadays, it shapes our professional behaviour and conduct. People who lag behind and are not aware of its applications suffer more and miss many important new opportunities. Good use of these applications provides expert guidance for executing day-to-day activities and meeting organizational demands at our fingertips. In this era of operational technologies, industries worldwide rely heavily on technology services for their momentum, maturity, efficiency, consistency, reliability and high level of success. Many enterprises already have some form of Artificial Intelligence and Machine Learning application in place, whether it is a pilot program, a proof-of-concept in the cloud, or even a production implementation. Even though Artificial Intelligence has been around for a long time, it is still an emerging technology for many enterprises. The growing global economy and the demand for customized products are moving the manufacturing industry (Industry 4.0) from a sellers’ market towards a buyers’ market. In this talk, we focus on the importance of Cloud AI, which is a simple mantra for the success of Industry 4.0. This success is only possible when we upgrade our industry and adopt Cloud AI.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION

  • 1. Introduction to neural networks and CNNs. 2. Hands-on CNN using TensorFlow and Keras on the Google Colab platform. 3. Various CNN architectures and transfer learning techniques for pretrained CNN models. 4. Hands-on transfer learning techniques (a minimal sketch follows this entry). 5. Convolutional neural networks for object detection and segmentation.
    HALL 3: WORKSHOP
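
    A minimal transfer-learning sketch in the spirit of items 3 and 4 of the outline, assuming a generic image-classification task with num_classes labels. The choice of MobileNetV2 and the parameter values are illustrative; the workshop may use a different pretrained network:

        import tensorflow as tf

        num_classes = 5   # placeholder for the task at hand

        # Pretrained ImageNet backbone with its classification head removed.
        base = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False   # freeze the pretrained convolutional features

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets omitted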

  • Good generalized machine learning models should retain high variability after learning. Tree-based approaches [2] are very popular due to their inherent ability to be represented visually for decision consumption, as well as their robustness and reduced training times. However, tree-based approaches lack the ability to generate variation in regression problems: the maximum variation any single tree-based model can generate is limited to the number of training observations, with each observation being a terminal node itself, and such a model is overfit. This paper discusses a hybrid of two intuitive and explainable algorithms, CART [2] and k-NN regression [3], to improve generalization and sometimes the runtime for regression problems. The paper first proposes the use of a shallow CART (tree depth less than the optimal depth after pruning). Following the initial CART, a k-NN regression is performed within the terminal node to which the observation being predicted belongs (a minimal sketch of this hybrid follows this entry). This yields better variation and more accurate predictions than a CART or a k-NN regressor alone, and adds another level of depth over an OLS regression.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION
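
    A minimal sketch of the hybrid described above: a shallow CART partitions the data, then a k-NN regressor is fit on the training rows in the query's terminal node. Tree depth, k, and the assumption that inputs are NumPy arrays are illustrative choices, not the paper's exact settings:

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.tree import DecisionTreeRegressor

        def fit_shallow_cart(X_train, y_train, depth=3):
            # Shallow CART: depth kept below the optimal post-pruning depth.
            tree = DecisionTreeRegressor(max_depth=depth).fit(X_train, y_train)
            leaves = tree.apply(X_train)   # terminal node id of each training row
            return tree, leaves

        def predict_cart_knn(tree, leaves, X_train, y_train, X_query, k=5):
            # For each query row, fit a k-NN regressor on the training rows that
            # fall into the same terminal node and predict from those neighbours.
            preds = []
            for x, leaf in zip(X_query, tree.apply(X_query)):
                mask = leaves == leaf
                knn = KNeighborsRegressor(n_neighbors=min(k, int(mask.sum())))
                knn.fit(X_train[mask], y_train[mask])
                preds.append(knn.predict(x.reshape(1, -1))[0])
            return np.array(preds)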

  • Usually any list-price change for a product is accompanied by some drop in demand (people start moving to competitor products or wait for prices to drop). It takes some time for demand to return to its normal level, depending on various factors such as the size of the price increase, the product whose price was increased, and competitor activity (whether or not they also increase their price). We have built a model that detects the time it takes for demand to return to normal after a price change, and then predicts how long it will take for demand to return to normal if we are planning a price increase in the future (a rule-based sketch of the detection step follows this entry). The technique uses a combination of machine learning algorithms as well as certain rule-based detection algorithms. We are also testing some deep learning image recognition algorithms (CNNs) to detect the pattern in order to classify the event as acclimation, but that is work in progress.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION
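
    A minimal rule-based sketch of the detection step described above: compare post-change demand to the pre-change baseline and report how many periods pass before a rolling average returns to within a tolerance band. The window, tolerance, and the assumption of an evenly spaced, integer-indexed series are hypothetical, not the talk's actual model:

        from typing import Optional

        import pandas as pd

        def periods_to_acclimate(demand: pd.Series, change_idx: int,
                                 window: int = 4,
                                 tolerance: float = 0.05) -> Optional[int]:
            # `demand` is assumed to have a default integer (0..n-1) index;
            # `change_idx` is the position of the price change.
            baseline = demand.iloc[:change_idx].mean()
            post = demand.iloc[change_idx:].rolling(window).mean()
            recovered = post[(post - baseline).abs() / baseline <= tolerance]
            if recovered.empty:
                return None          # demand never returned to its normal level
            return int(recovered.index[0]) - change_idx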

  • Enhancing memory-based collaborative filtering techniques for group recommender systems by resolving the data sparsity problem; comparing the proposed method's accuracy with basic memory-based techniques and a latent factor model; making accurate predictions for unknown ratings in sparse matrices based on the proposed method; and leaving more users satisfied with the group recommender system's performance. Memory-based collaborative filtering techniques are widely used in recommender systems. They are based on full initial ratings in a user-item matrix; however, in group recommender systems this matrix is usually sparse and users' preferences are unknown. Recommendation systems are widely used in conjunction with many popular personalized services, enabling people to find not only content items they are currently interested in, but also items in which they might become interested. Many recommendation systems employ the memory-based collaborative filtering (CF) method, which is generally accepted as one of the consensus approaches (a minimal sketch of this method follows this entry). Despite its usefulness for successful recommendation, several limitations remain, such as the sparsity and cold-start problems that degrade the performance of CF systems in practice. To overcome these limitations, a content-metadata-based approach is suitable, one that uses content metadata in an effective way. By complementarily combining content metadata with conventional user-content ratings and trust network information, the approach remarkably increases the amount of suggested content and accurately recommends a large number of additional content items. Experimental results show a significant enhancement of performance.
    Main Hall - TECH TALKS
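
    A minimal sketch of memory-based (user-based) collaborative filtering on a sparse rating matrix, predicting an unknown rating as a similarity-weighted average of other users' ratings. The tiny matrix is illustrative, with 0 marking a missing rating; it does not include the content-metadata or trust-network extensions the talk proposes:

        import numpy as np

        # rows = users, cols = items, 0 = unknown rating (sparse in practice)
        R = np.array([[5, 3, 0, 1],
                      [4, 0, 0, 1],
                      [1, 1, 0, 5],
                      [0, 1, 5, 4]], dtype=float)

        def cosine_sim(a, b):
            mask = (a > 0) & (b > 0)          # co-rated items only
            if not mask.any():
                return 0.0
            return float(a[mask] @ b[mask] /
                         (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-9))

        def predict(R, user, item):
            sims = np.array([cosine_sim(R[user], R[v]) if v != user else 0.0
                             for v in range(R.shape[0])])
            rated = R[:, item] > 0            # users who rated this item
            if not (sims[rated] > 0).any():
                return R[R > 0].mean()        # fall back to the global mean
            return float(sims[rated] @ R[rated, item] / sims[rated].sum())

        print(predict(R, user=0, item=2))     # estimate user 0's rating of item 2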

  • Detecting language nuances from unstructured data can be the difference in serving up the right Google search results, or in using unsolicited social media chatter to tap into unexplored customer behaviour (patients and HCPs). Because of its complex regulations and compliance requirements, the Healthcare and Life Sciences industry is known to be slow in adopting text analytics and Natural Language Processing. Industries face significant challenges in analyzing this data due to its unstructured textual nature. Because of massive digital disruption across the globe, there is a sharp rise in the generation of naturally written forms of electronic data. This explosive growth of unstructured clinical, medical, regulatory and healthcare data has made innovative NLP and text analytics technologies a priority. The key challenge in adopting NLP is the exponential growth of unstructured text from various business teams within the organization. This ever-growing data remains untouched when it could be mined for actionable insights and recommendations that generate significant value for consumers and patients across the globe. Therefore, detecting potential actions and recommendations in unstructured data could be the key difference in serving up the right insights or in deep-diving into the untouched behaviour and actions of physicians and patients. Along with this, the spread of the internet and IoT devices is also generating a significant amount of data for creating additional value in the market using Natural Language Processing. Many healthcare and life sciences organizations are progressively adopting AI-driven NLP and text analytics capabilities, which help them get improved, near-real-time insights from unstructured data to derive better results and improved performance across products. This talk will explain our strategy and thought leadership on the adoption of NLP and text analytics in the Healthcare and Life Sciences industry.
    HALL 2: KNOWLEDGE TALK/ CASE PRESENTATION

  • Day 2 Hyderabad

    January 31, 2020

  • Machine learning has found applications in various practical domains. In this presentation we will look at how deep learning can be used to classify images. Specifically, we will look at Convolutional Neural Networks, a subset of deep learning models, to solve a classic computer vision problem: differentiating between two sets of images. Finally, we will compare the performance of machines with that of humans in recognizing images.
    Main Hall - TECH TALKS

  • Siemens is one of the leading market players in the locomotives and mobility business across the world. Our platform Railigent (Rail + Intelligent) serves as a platform for customers and clients to see how the future of mobility is shaping up. There are many use cases in the rail world where images or videos can be used as an unstructured data source. To name a few: identifying components that need maintenance, and exactly which part of the component seems to have triggered it, so that engineers save time and effort by directly accessing these images and focusing on the pre-filtered components and parts (failure prediction for train components in cities like Prague and London); identifying faulty joints in the rails using static cameras mounted on the train; monitoring train station platforms with CCTV cameras and raising alerts for congestion; comparing the energy consumption of locomotives with the ecoCruise mode turned on or off; counting the passengers who get off and board a train; monitoring the number and location of passengers on a platform; and many more, with the help of our ecosystem.
    Main Hall - TECH TALKS

  • In the current scenario (2019), a middle-class youngster or middle-aged person spends a lot of their earned salary on loans and other expenses and saves far less than people did in 2000. Solution model: through our research, we will apply unsupervised and supervised learning to achieve the following. I. Identify different clusters of income and expense to help detect income and expense patterns; for the scope of this paper we will use the individual's bank statement (a clustering sketch follows this entry). II. Give the individual the ability to visualize their income and expense pattern over a span of 5-10 years. III. Prepare a predictive model for the individual that helps predict the areas of focus in terms of savings and future spending.
    Main Hall - TECH TALKS
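
    A minimal sketch of step I above: clustering bank-statement transactions by amount and timing with k-means. The file name, column names, and number of clusters are hypothetical placeholders, not the talk's actual setup:

        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical bank-statement export with one row per transaction.
        txns = pd.read_csv("bank_statement.csv")   # columns assumed: date, amount
        txns["date"] = pd.to_datetime(txns["date"])
        txns["day_of_month"] = txns["date"].dt.day
        txns["is_credit"] = (txns["amount"] > 0).astype(int)

        features = StandardScaler().fit_transform(
            txns[["amount", "day_of_month", "is_credit"]])

        # Group transactions into income/expense patterns (k chosen for illustration).
        txns["cluster"] = KMeans(n_clusters=4, random_state=0).fit_predict(features)
        print(txns.groupby("cluster")["amount"].agg(["count", "mean", "sum"]))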

Extraordinary Speakers

Meet the best Machine Learning Practitioners & Researchers from the country.

  • Early Bird Pass

    Available from 1st Nov to 6th Dec 2019
  • Access to Exhibition Area
  • Access to food & beverage
  • All-access, 2-day pass
  • Group discount available
  • 4,500 + taxes
  • Late Pass

    Available from 10th Jan Onwards
  • Access to all speakers
  • Access to Exhibition Area
  • Access to food & beverage
  • All-access, 2-day pass
  • No Group discount available
  • 7,500 + taxes