
From Viewership Patterns To Movie Trailers, AI & Analytics Are Crucial In Showbiz: Guruprasad Mandrawadkar, Star India


The adoption of analytics, machine learning and related technologies in the media and entertainment industry has made the industry more viewer-specific and has resulted in better content generation, among various other benefits. Analytics India Magazine got in touch with the head of business intelligence at one of the biggest media companies — Star India. Guruprasad Mandrawadkar has been leading key end-to-end engagements in the industry, from strategic technology consulting and architecture design to overseeing complex implementations that support the business.

With over 18 years of experience, he is a pioneer in data warehousing, analytics and big data, among other areas. He has led several teams, providing expert inputs for building strategy and roadmaps at various companies. While he declined to share company-specific details, he spoke to us at length about the adoption of data analytics in the media and entertainment industry, the importance of customer analytics, and the challenges in its adoption, among other topics.

Analytics India Magazine: How extensively is data analytics used in the media and entertainment industry? Please highlight some use cases

Guruprasad Mandrawadkar: Within the traditional linear channel, analytics has been widely used for understanding the viewership patterns, mining the rich datasets (provided by BARC for India markets) for understanding advertiser preference, optimising content scheduling and generating insights into what content is working in what segment.

Some of the widely adopted use cases of analytics on OTT platforms are NBO (Next Best Offer, aka recommendation engines), NBA (Next Best Action), targeted advertising, customer behaviour understanding and sentiment analytics. Other use cases include understanding the customer journey across the value chain of movie production and in trailer production.
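
The "Next Best Offer" idea can be sketched as simple co-watch counting: recommend titles most often watched together with something the viewer has already seen. The titles, viewing histories and the `next_best_offer` helper below are all hypothetical, illustrating the concept rather than any production recommender:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical watch histories; titles and data are illustrative only.
histories = [
    {"Drama A", "Thriller B", "Comedy C"},
    {"Drama A", "Thriller B"},
    {"Thriller B", "Comedy C"},
]

# Count how often each pair of titles is watched by the same viewer.
co_watch = defaultdict(int)
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def next_best_offer(seen_title, k=2):
    """Recommend the k titles most often co-watched with seen_title."""
    scores = {b: n for (a, b), n in co_watch.items() if a == seen_title}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(next_best_offer("Drama A"))  # → ['Thriller B', 'Comedy C']
```

Real systems replace raw co-occurrence counts with matrix factorisation or learned embeddings, but the offer-ranking step has the same shape.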

For instance, the trailer for the movie Morgan was created using AI. Over 100 horror and thriller film trailers were analysed to understand what background sounds should be used and what character emotions to incorporate. The analysis focused on musical scores, the emotions in certain scenes (indicated by people’s faces, colour grading, and the objects shown), and the traditional order and composition of scenes in movie trailers. After this analysis, the engine chose the 10 best moments that could be considered for a trailer. The entire process cut the time and labour involved in trailer-making from 10-30 days down to 24 hours.
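
The moment-selection step can be sketched as scoring each scene on a few features and keeping the top 10. The feature names, weights and data below are invented for illustration and are not the actual system used for Morgan:

```python
# Each scene is scored on illustrative features (tension in the musical
# score, facial-emotion intensity), with values normalised to [0, 1].
scenes = [
    {"id": i, "music": (i * 7 % 10) / 10, "emotion": (i * 3 % 10) / 10}
    for i in range(40)
]

def suspense_score(scene, w_music=0.6, w_emotion=0.4):
    # Weighted combination of per-scene feature scores.
    return w_music * scene["music"] + w_emotion * scene["emotion"]

# Shortlist the 10 highest-scoring moments for a human editor to review.
shortlist = sorted(scenes, key=suspense_score, reverse=True)[:10]
print([s["id"] for s in shortlist])
```

The published accounts of the Morgan trailer describe exactly this division of labour: the system shortlists candidate moments, and a human editor assembles the final cut.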

AIM: Would you like to share a specific use case where consumer insights played a role — for instance, in marketing or personalising?

GM: Data-driven media and entertainment production houses are using big data and analytics platforms to boost media presence and digital marketing initiatives.

  • The use of analytics can help media companies answer questions around tracking user clicks, user reviews, broadcasts and user shares, trailer success, and the reach of teasers across devices and media types. This information, when managed effectively, can drive content optimisation.
  • Marketing teams can use an analytics platform to understand campaign health and track which campaign channel is performing better than another. Movie production houses are trying to mine the customer journey across the value chain of movie production.

AIM: What is the role of customer analytics for improving business outcomes in the entertainment sector?

GM: In the entertainment industry, the customers are the advertisers who advertise on the platform or channel. Here the industry can tap into the rich datasets provided by BARC, which give information on the GRP concentration of advertisers across networks and genres. Knowing which brand is advertised on which network and channel can also be used to target customers.

AIM: What are the core focus areas of an analytics and BI head in the media industry?

GM: As the technology has evolved rapidly over the past two to three years, a BI head needs to prepare the organisation for upcoming disruptions and understand which key technology capabilities need to be implemented to migrate to a modern BI platform. A modern BI platform would include key capabilities revolving around NLP, NLG and data preparation, building a data science lab, and other analytical capabilities. The current technology offerings are ripe for experimenting with schema-on-read capabilities, where the existence of a schema is no longer a necessity for doing exploratory analysis. Augmented BI is the next cornerstone in building a modern BI landscape.
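
Schema-on-read means the structure of the data is discovered at query time rather than enforced when the data lands. A minimal sketch of the idea, using hypothetical viewership events with varying fields:

```python
import json

# Raw events landed without a predefined schema; fields vary per record.
raw_lines = [
    '{"user": "u1", "channel": "sports", "minutes": 34}',
    '{"user": "u2", "channel": "drama"}',
    '{"user": "u1", "device": "mobile", "minutes": 12}',
]

records = [json.loads(line) for line in raw_lines]

# The "schema" is discovered at query time, not enforced at write time.
fields = sorted({key for rec in records for key in rec})
total_minutes = sum(rec.get("minutes", 0) for rec in records)

print(fields)         # discovered columns
print(total_minutes)  # exploratory aggregate despite missing values
```

Big data engines such as Spark and Hive apply the same principle at scale; the point is that exploratory analysis can start before anyone has designed a warehouse schema.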

AIM: How has been your journey in analytics so far? What are the various solutions in analytics, BI and big data that you have worked on?

GM: Personally, outside of Star I was involved in multiple major big data and analytics deliveries. One was building a fraud model for a general insurance company in India. This required sifting through large sets of claims data to identify which claims were genuine and which were fraudulent.
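
A minimal sketch of the claim-triage idea (not the actual model built for the insurer): score each claim against a few red-flag rules and route high-scoring claims to investigators. The field names and thresholds below are hypothetical; a real fraud model would be learned from labelled claims data:

```python
# Hypothetical claim records; fields and thresholds are illustrative.
claims = [
    {"id": 1, "amount": 900_000, "days_since_policy_start": 20, "prior_claims": 3},
    {"id": 2, "amount": 45_000, "days_since_policy_start": 700, "prior_claims": 0},
]

def fraud_score(claim):
    """Simple additive red-flag score; a real model would be trained, not hand-written."""
    score = 0
    if claim["amount"] > 500_000:              # unusually large claim
        score += 2
    if claim["days_since_policy_start"] < 90:  # claim soon after purchase
        score += 2
    if claim["prior_claims"] >= 2:             # frequent claimant
        score += 1
    return score

flagged = [c["id"] for c in claims if fraud_score(c) >= 3]
print(flagged)  # claims routed to manual investigation
```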

The second major experience was running the analytics roadmap and framework proposition for a bank. When I was working at Teradata almost six years ago, big data analytics was beginning to emerge as a leading technology solution. People were trying to experiment with Hadoop on one hand, but did not have large-scale expertise on the other.

AIM: How has the growth story of the adoption of analytics and AI in the entertainment industry been?

GM: The Indian entertainment industry is being driven primarily by OTT and sports content, especially non-cricket sports. Multiple sporting events have been driving viewership across markets. Analytics is being used in:

  • Content: Film scheduling on TV is done based on viewership rating KPIs, and digital uptake by millennials determines which films to invest in for OTT platforms.
  • Engaging audiences: Sports broadcasts now use real-time analytics and animation to play out what-if scenarios to engage audiences, and social media analytics to gauge their sentiment. During the last IPL, one of the leading digital services organisations engaged cricket enthusiasts in major metros by providing live match simulation using virtual reality.
  • Fan engagement: The Royal Challengers Bangalore (RCB) team has launched its own viewer-interactive system, “RCB Bold Bot,” an AI-enabled application with a chatbot to enhance viewers’ engagement experience, among others.

AIM: How does analytics help compete with other players in the space?

GM: Social media analytics can be used by movie production houses to predict the optimal release date for a movie, what content will drive a movie's success, and what content to select while promoting a trailer.

The industry can also use data collected from social media channels to gauge the expectations of the target audience and the content buzz around a movie. For example, in the US, ‘Cats & Dogs’ and ‘America’s Sweethearts’ were both scheduled for release on July 04, 2001. To avoid competition, ‘America’s Sweethearts’ was moved a week later to July 13, 2001, but soon a new entrant, ‘Legally Blonde’, was announced for release on the same date of July 13, 2001.
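
The buzz comparison described above can be sketched as a naive word-list sentiment count over posts mentioning each title. The posts, titles and word lists below are toy data; a production system would stream real posts and use a trained sentiment model rather than a hand-written lexicon:

```python
# Toy social posts about two hypothetical titles.
posts = [
    ("Movie A", "the trailer looks amazing, great cast"),
    ("Movie A", "boring premise, awful pacing"),
    ("Movie A", "great soundtrack"),
    ("Movie B", "awful trailer"),
]

POSITIVE = {"amazing", "great", "love"}
NEGATIVE = {"boring", "awful", "bad"}

def buzz(title):
    """Net positive-minus-negative word count across posts for a title."""
    score = 0
    for t, text in posts:
        if t != title:
            continue
        words = set(text.replace(",", " ").split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score

print({t: buzz(t) for t in ("Movie A", "Movie B")})
```

Comparing these scores over time, per market, is one way a studio might judge whether a planned release date is walking into stronger competition.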

AIM: What are the challenges you face being at the analytics forefront?

GM: The biggest challenge is getting the right staffing mix. Secondly, the technology is changing so fast that the tools we are using today to build analytics solutions will soon be superseded by augmented analytics. Where to make the right investments will also be a key concern.

The post From Viewership Patterns To Movie Trailers, AI & Analytics Are Crucial In Showbiz: Guruprasad Mandrawadkar, Star India appeared first on Analytics India Magazine.


Industry Mentors Can Help Further The Careers Of Good Data Scientists, Says Sunil Bhardwaj Of SAS


Data scientists are among the most wanted professionals across industries, with companies and startups looking for qualified data scientists to join their teams. While we have covered many articles around what a data scientist's job is all about, what a data scientist should have in their resume, must-have skills for data scientists and more, we bring an exclusive industry view on what it takes to be a good data scientist.

For this month’s theme, we spoke to various data science heads in the industry who shared their perspectives on what it takes to be a good data scientist. This series of interviews will be especially helpful for those looking to transition into the data science teams of companies.

Our first interaction is with Dr Sunil Bhardwaj, who is currently working as a senior analytics training consultant with SAS India. He is a SAS Certified Data Scientist and has carried out partner enablement for various ongoing projects with partners such as KPMG and Wipro in Anti-Money Laundering, Credit Risk and related areas. He has more than 15 publications in the areas of analytics and quantitative techniques in national and international journals.

Analytics India Magazine: What are the key skill sets that you look for while hiring for data science roles, in terms of languages and technical skills?

Sunil Bhardwaj: The key skill sets required for a data scientist role include:

  • Data Management including Big Data and unstructured data (knowledge of HDFS, NoSQL, Hive, Pig, SAS etc.)
  • Analytics including AI, Machine Learning, Forecasting, Optimisation and Experimentation (SAS, SAS Viya, R & Python)
  • Data Visualisation (SAS Visual Analytics and other visualisation tools)

AIM: What are the non-technical skills and traits that a good data scientist should have?

SB: Technical skill helps with hands-on work, but the ability to correctly interpret results and communicate them to a general audience is even more important. Communication is the most important skill for translating analytical findings to a non-technical audience; it ensures that analytics and data science work is actually consumed for overall business benefit. Three traits that are important, in my opinion, from this perspective are:

  • Communication
  • Team Player
  • Leadership

AIM: Do you believe that a good data scientist should be obsessed with solving problems and not new tools?

SB: Solving problems is fundamental to a data scientist. However, tools are equally important. Imagine a surgeon without his surgery kit! Tools are the enablers for any data scientist: the better and faster the tools, the better the delivery and the solution.

AIM: Is it educational qualifications or experience that matters more to be a data scientist in companies?

SB: Basic educational background matters a lot. Educational backgrounds with analytical, mathematical, statistical, basic science and computer science courses or skills are pretty fundamental and carry an advantage over other traditional courses.

AIM: Who would be a preferred candidate for a data science role — one with certification from a full-time course or one with an executive course?

SB: It depends on the role and the job description. A suitable interview process is still required to gauge the knowledge and skills of the individual.

AIM: What is the best learning curve for a data scientist and the best resources to learn?

SB: Typically, a good data science course may have a duration from 6 months to 2 years. The learning typically starts with fundamental data management and statistical topics and ends in contemporary concepts of machine learning and artificial intelligence. Today there are many resources to learn and become a data scientist. These are:

  • Full-time and Part-time University courses (ISB, IIML, IIMB)
  • Full-time Classroom or Self-paced e-learning Corporate courses (SAS academy of data science)
  • MOOCs (Coursera)

Some of the subjects in which a budding data scientist should strengthen their base are: mathematics and statistics, computer science (SQL and OOPS), and big data (Hadoop, HDFS, Hive, Pig).

AIM: What is the importance of industry mentors for a budding data scientist?

SB: Industry mentors encourage & guide the career path to success. They are highly valuable counsellors who inspire the next-gen with their experience and knowledge. They can provide a listening ear to mentees, keep a watch on their goals, and challenge their thinking to bring out the best in them.

Mentees can learn from their mentors:

  • Practical insights
  • Real life case studies
  • Real challenges for data scientists
  • Mentoring for the job market

AIM: How important is the knowledge of the sector for being a good data scientist?

SB: Domain or sector knowledge is important, and each domain requires a different type of knowledge; the healthcare domain may require a different type of knowledge from a financial markets domain. The higher the domain expertise, the better. At a minimum, a data scientist should understand what exactly the sources of the data are, how the data was collected, what biases may exist in the sample, what the variables represent, and what the objectives of modelling in the domain are.

AIM: In a nutshell, what are the 3 must-have skills for a data scientist?

SB: The three must-have skills for a data scientist are data management, data modelling or analytics, and data visualisation, along with business communication.


Marvel Comics Visionary Stan Lee Imagined A World Full Of Possibilities, Including An AI Sidekick



Be it a fan from any “universe”, on 12 November 2018, all comic book buffs mourned the death of Stan Lee, the smiling, cocky Marvel Comics patriarch who helped usher in a new era of superhero storytelling. The legendary writer who also donned many other hats as editor, publisher and producer, was the creator of blockbuster superheroes like Spiderman, Thor and Iron Man, among others.

In the mid-to-late ‘60s, when the world was going through a heavy bout of consumerism (think Mad Men), Lee brought to the world Iron Man, a highly tech-savvy business tycoon. From the ashes of a weapon of mass destruction, Lee made his protagonist Tony Stark create a powered suit of armour.


AI Sidekick

Later on, Lee and Stark went on to create many high-tech devices. The most noteworthy of them was J.A.R.V.I.S. — short for “Just A Rather Very Intelligent System”.

A complete artificial intelligence-based user interface, J.A.R.V.I.S. is a sardonic sidekick to Stark’s egotistical Iron Man. In Avengers: Age of Ultron, J.A.R.V.I.S. was defined by Stark as, “Started out, J.A.R.V.I.S. was just a natural language UI. Now he runs the Iron Legion. He runs more of the business than anyone besides Pepper (sic).”


In the ‘60s when the world was busy buying “toasted” cigarettes and swanky typewriters, Lee’s vision for J.A.R.V.I.S. was nothing short of revolutionary. Many of the predictions he made back then — right from AI-based personal assistants to facial recognition — have come to fruition now.

Machine Learning In The Marvel Universe

In fact, over the course of his interactions with Tony Stark as an AI programme, it can be seen in the Iron Man anthology that J.A.R.V.I.S. becomes more and more “human” in his personality. In what can now be read as a startlingly real example of machine learning, J.A.R.V.I.S. is shown gaining a more natural way of speaking, as well as an evolved sense of humour.

Augmented Reality

In the movies as well as in comic books, J.A.R.V.I.S. is shown to use holograms to help Iron Man visualise data and information better. This can very well be today’s version of augmented reality. With these holograms, J.A.R.V.I.S. helps Stark get an idea of his plans, prototypes as well as frameworks for his new inventions.

Information Retrieval

Throughout the Iron Man anthology, Lee visualised J.A.R.V.I.S. as an all-knowing, wise entity who has information at the tips of his (virtual) fingers. There are many instances where J.A.R.V.I.S. is shown digging out information, even classified information, on Iron Man’s enemies. Many fan sites and tech buffs have called it one of the first glimpses of what is today known as information retrieval.

Apart from the beloved J.A.R.V.I.S., Lee has also dabbled with other futuristic, tech-savvy creations like Spiderman’s training suit, Ant Man’s suit and many other weapons and machinery for his superheroes and villains alike.


The 3 Ps To Be A Good Data Scientist Are Programming Foundation, Passion And Patience



For the next interaction in our series on ‘How to be a good Data Scientist’, AIM caught up with Kaushik Srinivasan, Senior Vice President of Strategy and Innovation at eMudhra. Srinivasan is one of the founding members of the company, a licensed Certifying Authority (CA) of India issuing digital signature certificates. The company is at the forefront of driving paperless transformation as a part of the Digital India campaign and also adopts recent technological innovations such as data analytics and blockchain in its daily work. With a team comprising data scientists and AI researchers, hiring a good data scientist is crucial for driving success and innovation. In a freewheeling chat, Srinivasan shares his views on what it takes to be a good data scientist.

Analytics India Magazine: What are the key skill sets that you look for while hiring for data science roles? What are the languages and technical skills they should know?

Kaushik Srinivasan: Data science is a broad area, and a number of emerging tools and languages are promising when it comes to crunching large volumes of data. We look for knowledge of Python and Scala, and more recently the ability to work with Apache Spark and TensorFlow.

AIM: What are the non-technical skills and traits that a good data scientist should have? How important is effective communication and business mindset for being a good data scientist?

KS: The ability to translate business to technology is critical for a data scientist. If you are trying to understand customer churn at a bank versus an insurance company, the tech stack you use is more or less the same, but the domain understanding, the ability to map that to the inputs you need, and the ability to then select the right algorithm play a vital role in getting the desired outcomes. If this is not done correctly, there is a high likelihood of false positives. The other quality required is patience, since accuracy in data science is a function of iterative trial and error to get the right outcome.

AIM: Do you believe that a good data scientist should be obsessed with solving problems and not new tools?

KS: I would say it's a combination of domain and technical understanding, which is why the role is valued so much in the industry today. Today most complex problems, including health diagnosis and treatment, are tackled through technology, more specifically artificial intelligence. Each tool or platform has its own strengths and weaknesses. So, unless you are aware of which tool or platform is best suited to solving the specific problem, a lot of unnecessary effort will be put in the wrong direction.

AIM: Is it educational qualifications or experience that matters more to be a data scientist in companies?

KS: Formal education in machine learning has more or less become a major prerequisite today. I think someone with a lot of passion can substitute for this requirement, but developing a knack for problem solving through real-world experience goes a long way in developing a person's calibre, as such people are better suited to solving complex problems.

AIM: Who would be a preferred candidate for a data science role — one with certification from a full-time course or one with an executive course?

KS: As stated in the previous answer, we look for people who are passionate and who have a background in technology, and we train them on the job. A good certification is certainly desirable, but in no way is it an eliminating factor.

AIM: What is the best learning curve for a data scientist and the best resources to learn?

KS: The best learning is on the job, as the data scientist looks at data first-hand and understands how it is used to drive decision making. Given today's systems and computational power, data crunching has become a real-time exercise, giving data scientists who have the passion a quick ability to impact the company's bottom line.

AIM: What are the subjects a budding data scientist can master during the early days of his/her education and career to be a good data scientist?

KS: A strong understanding of programming concepts such as object-oriented programming, database design, clustering and multithreading is a must.

AIM: What is the importance of industry mentors for a budding data scientist?

KS: Industry mentors bridge the business technology gap, and getting a good mentor will always help accelerate the learning curve of a budding data scientist.

AIM: How important is the domain knowledge for a data scientist?

KS: Domain understanding is critical to succeed in the data analytics world, but then again, this is something that has to be learnt on the job.


Experience Wins Over Educational Qualifications In Data Science, Says Swapnasarit Sahu Of Zeotap


Swapnasarit Sahu, the head of Data Science and Analytics at Zeotap, has interesting insights to share on what it takes to be a good data scientist. An experienced professional with a demonstrated history of working across domains, he has built many cutting-edge AI products. At Zeotap, he leads a team of data scientists and data engineers to build products from scratch for data monetisation in the telecom industry. He has been instrumental in building user behaviour models, recommendation systems, data quality monitoring and more.

Analytics India Magazine: What are the key skill sets that you look for while hiring for data science roles? What are the languages and technical skills they should know?

Swapnasarit Sahu: Data science makes data the centrepiece of decision making, used to extract insights and to form or learn about patterns in the data. In my view, you have to define your overall business goal, derive the problems you want to address and solve, and then match the skill sets. Some of the key roles are:

  • Kind 1: People skilled in advanced statistics and economic theories. In terms of languages to code in: Python, R or SAS.
  • Kind 2: People skilled in machine learning (or deep learning). In terms of languages: comfortable with Python, Scala, Java or C. If you are dealing with big data, familiarity with a big data framework (like Hadoop, Spark or Apache Flink) is needed. For this set of people, algorithmic thinking is also very important, so it is important to be really good at data structures and algorithms.
  • Kind 3: If you are dealing with supply chains, people familiar with AI planning or queuing theory and optimisation. In terms of coding languages: Python or C mostly.

If you are an organisation dealing with a lot of text data, then NLP techniques are very much necessary. Python is excellent and makes it easy to manipulate text; you could live entirely in the pythonic world.
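
As a minimal illustration of the kind of text handling Python makes easy, the sketch below tokenises a (made-up) review, drops stopwords and counts the remaining terms using only the standard library; real NLP pipelines would add stemming, n-grams and a proper stopword list:

```python
import re
from collections import Counter

# Minimal text pipeline: tokenise, drop stopwords, count frequent terms.
review = "The plot is thin but the visuals are stunning, truly stunning."
STOPWORDS = {"the", "is", "but", "are", "and"}

tokens = re.findall(r"[a-z]+", review.lower())
counts = Counter(t for t in tokens if t not in STOPWORDS)

print(counts.most_common(2))
```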

For deep learning, TensorFlow is also becoming quite popular, but it is not mandatory, since many things can be achieved using PyTorch and Keras.

AIM: What are the non-technical skills and traits that a good data scientist should have? What is the importance of effective communication and business mindset in DS?

SS: Having a curious mind is one of the most important traits of a data scientist. He or she should be aware of what the hypothesis's premises (assumptions) are and at what point the hypothesis can fail.

Businesses today are mushrooming on datasets, and it is important to know the business, since all the questions asked of the data relate to the business. Once you are aware of the business and the outcome you want to achieve, effective communication also becomes critical.

AIM: Do you believe that a good data scientist should be obsessed with solving problems and not new tools?

SS: Well, as for any engineer, tools make life easier. Tools help you visualise the data, extract patterns and build better, improved models. In fact, many big data problems cannot be solved without the right toolset; for instance, how can one process terabytes of data without a big data framework like Spark or Flink? So it is important to know the tools, but don't get carried away by any particular set of them.

AIM: Is it educational qualifications or experience that matters more to be a data scientist in companies?

SS: It is often observed that companies hit a roadblock when they are not able to adapt to different business models or to change. The fundamental problem is that the company could not predict or understand how the business landscape could change overnight with the advent of new technologies.

If you are a data scientist, you must be someone who understands beyond monolithic systems. Experience matters far more than educational qualifications; many good data scientists are self-taught. During education, we play in a fairly ideal world where the standard algorithms work on the datasets given. The real world is quite different: it is really hard to make sense of data given the amount of noise and network effects in it. The more problems you solve, the more expertise you gain in how to process the data for a given business case and in what can be achieved with the data in hand.

AIM: Who would be a preferred candidate for a data science role — one with certification from a full-time course or one with an executive course?

SS: In my view, I haven't seen any good results yet from the executive courses offered by institutes in the country. Current executive courses help develop a mindset, but they are quite basic in nature; they hardly cover things in depth or talk about the state of the art. Many people never work during the coursework with a real dataset of the kind we see every day in the business world.

So, I would suggest that self-learning is critical in this field. One needs to read research papers and keep up to date with the state of the art. Applying techniques to openly available datasets also helps a lot.

AIM: What is the best learning curve for a data scientist and the best resources to learn?

SS: There are now a lot of wonderful online courses available to learn the basic concepts, and code on GitHub is available to practise with. These are great resources to learn from. There are also data science competitions like Kaggle, which offer real-world datasets to practise on. Candidates can choose any of these options, learn from books or online resources, and play with data enough to get the concepts right. It is also important to ask questions such as why a certain algorithm behaves the way it does.

AIM: What are the subjects a budding data scientist can master during the early days of his/her education and career to be a good data scientist?

SS: Some of the key areas are:

  • Linear algebra, probability and statistics for the first kind
  • Linear algebra, probability, elementary calculus, machine learning fundamentals, and some important CS courses like data structures and algorithms for the second kind
  • Linear algebra, probability, machine learning fundamentals and NLP for those who want to choose NLP as a career path
  • Linear algebra, probability, AI planning such as stochastic algorithms (genetic algorithms, ACO, PSO, etc.) and discrete optimisation for supply-chain planning

AIM: What is the importance of industry mentors for a budding data scientist?

SS: A mentor should guide the team to ask the right questions about the data and the business. They should help set realistic goals and expectations for an experiment and translate industry problems into data science problems whenever needed. Industry mentors have a clear understanding of the business problems faced by industry, and they can point in the right direction for tackling new problems that did not exist before. Many jobs have become obsolete in this era, but there is a need for accountable and transparent AI systems, and guiding data scientists towards these will help their careers immensely.

AIM: In a nutshell, what are the 3 must-have skills to be a data scientist?

SS: The 3 must-have skills, according to me, would be:

  1. Ask the right questions about the data
  2. Learn to measure the failure or success of an experiment with the right metrics
  3. Be able to implement things fast, and learn to fail fast rather than chase the perfect result
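
The second point, measuring an experiment with the right metrics, can be sketched with the standard precision/recall pair; the churn-model scenario and user IDs below are invented for illustration:

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted positive set vs the true positive set."""
    predicted, actual = set(predicted), set(actual)
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall

# Illustrative experiment: which users did a churn model flag correctly?
p, r = precision_recall(predicted={"u1", "u2", "u3"},
                        actual={"u2", "u3", "u4", "u5"})
print(p, r)
```

Reporting both numbers, rather than a single accuracy figure, is what makes it possible to call an experiment a failure honestly: a model can look accurate while missing most of the cases that matter.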


Quantitative & Numerical Competency Makes Or Breaks A Good Data Scientist: Joydeep Dam, Bridgei2i


Our next interaction for this month's theme is with Joydeep Dam, the director of algorithms and artificial intelligence at Bridgei2i Analytics Solutions. He has over 16 years of combined experience in the fields of analytics, algorithm development and quantitative disciplines. Prior to this, he held various key positions in several organisations. Here are his views on what it takes to be a good data scientist.

Analytics India Magazine: What are the key skill sets that you look for while hiring for data science roles? What are the languages and technical skills they should know?

Joydeep Dam: Some of the key skill sets are:

  • Quantitative and numerical competency and aptitude are critical to becoming a good data scientist. Clarity on fundamental concepts is extremely important.
  • Regarding technical skills, there are far too many to name. Good knowledge of a few of them is preferable; whatever techniques they know, they should know in depth.
  • In terms of (programming) languages, Python, R and C/C++ are necessary.

AIM: What are the non-technical skills and traits that a good data scientist should have? How important is effective communication and business mindset for being a good data scientist?

JD: Some of these are:

  • Ability to communicate effectively is extremely important to be a good data scientist. Quite often a data scientist will need to explain abstract concepts to a non-technical audience. It will be almost impossible to do that without good communication skills.
  • Without a business mindset, solutions tend to become detached from the real problems and more academic in nature. Since a data scientist is expected to tackle real-life problems most of the time, a business mindset or a genuine appreciation of practical problems is crucial.

AIM: Is it educational qualification or experience that matters more to be a data scientist in companies?

JD: The right educational qualifications build the foundation to be a good data scientist. The right kind of exposure and experience gives them the proper perspective and an understanding of where and how to use their skill sets.

AIM: What are the best resources for data scientists and what are the subjects they should master during the early days of his/her education to be a good data scientist?

JD: There are multiple resources that a data scientist can use: peers, universities, open source contents, books, etc. Some of the subjects such as maths, stats, programming are good skills to pick up in the early days.

AIM: What is more important, technology tools or algorithm concept learning?

JD: Both are equally important. Without tools, the algorithms can’t be implemented, and without conceptual understanding, you won’t know what you are doing with the tools. It can be thought about as the engine and the wheels of a car. The car won’t move ahead unless both are there.

AIM: How important is sector knowledge for being a good data scientist?

JD: For a data scientist focusing specifically on a particular sector, it’s very important to have deep knowledge of that sector.

AIM: In a nutshell, what are the 3 must-have skills to be a data scientist?

JD: Knowledge and conceptual clarity regarding the fundamentals, knowledge of appropriate tools and platforms, deep understanding of the problems.

The post Quantitative & Numerical Competency Makes Or Breaks A Good Data Scientist: Joydeep Dam, Bridgei2i appeared first on Analytics India Magazine.

NetApp’s 5-phase Model Visualises Seamless Flow Of Data In IoT Projects: Deepak Visweswaraiah


Analytics India Magazine got in touch with Deepak Visweswaraiah, Senior VP and MD, Manageability Products and Solutions Group, NetApp, to understand more about how the company used analytics in their organisation. Speaking as a technology veteran with over 30 years of work experience, Visweswaraiah talked about the importance of new tech in the industry, the global talent and the upcoming trends.

Here are the excerpts:

Analytics India Magazine: Please tell us about NetApp’s usage of analytics to provide solutions

Deepak Visweswaraiah: In today’s digital world, data is considered the lifeblood of most organisations looking to thrive and stay ahead of the competition in an extremely dynamic technology industry. Our data analytics solutions are powerful tools that help companies draw valuable insights from their businesses. We then help them use this wealth of insight to optimise operations, create innovative business opportunities, and monitor and improve services and performance. It is all about making informed decisions, enabling new customer touchpoints through technology and gaining an overall competitive advantage.

With more than 25 years of experience in data management, NetApp has developed its own five-phase model that visualises the flow of data in IoT projects and solves special challenges of data management. The phases are defined as collect, transport, store, analyse and archive. This concept is the foundation of Data Fabric, our data management strategy that allows free seamless movement of data in the most efficient and secure way possible.

NetApp Cloud Sync is designed for data migration and synchronisation. The NetApp® Cloud Sync service offers a simple, secure and automated way to migrate data to any target, in the cloud or on premises. Once the data is migrated, Cloud Sync continuously syncs it on your predefined schedule, moving only the deltas, so the time and money spent on data replication are minimised.
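Cloud Sync's internals are not public, but the "moving only the deltas" idea described above can be sketched in a few lines: compare content hashes between source and target, and copy only the entries that are new or changed. All names and data structures below are hypothetical stand-ins, not NetApp's implementation.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Content hash used to detect whether a file has changed."""
    return hashlib.sha256(data).hexdigest()

def compute_deltas(source: dict, target: dict) -> list:
    """Return names of files that are new or changed on the source.

    `source` and `target` map filename -> bytes content, standing in
    for real storage endpoints.
    """
    deltas = []
    for name, data in source.items():
        if name not in target or file_digest(target[name]) != file_digest(data):
            deltas.append(name)
    return sorted(deltas)

src = {"a.txt": b"v2", "b.txt": b"same", "c.txt": b"new"}
dst = {"a.txt": b"v1", "b.txt": b"same"}
print(compute_deltas(src, dst))  # only 'a.txt' (changed) and 'c.txt' (new)
```

A real sync service would also track a schedule and handle deletions, but the cost saving comes from the same principle: unchanged data is never re-transferred.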

AIM: Tell us about some of the use cases on how various industries in India have benefited from NetApp’s cloud services.

DV: NetApp’s cloud data services enable our customers to deliver meaningful business outcomes quickly and cost-efficiently, eliminating lengthy IT processes and complexities. By 2020, the public cloud services market in India is expected to reach $4.1 billion according to Gartner. Today, India is second only to China as the largest and fastest-growing cloud services market in the Asia Pacific.

Some of our customers in India include HCL Technologies Limited, which uses NetApp AltaVault cloud-integrated storage appliances to securely back up data to any cloud, boost recovery and cut costs, and Tata Consultancy Services (TCS), which uses NetApp storage technologies. We have also been working with the Indian government on ‘smart city’ and security-related projects.

AIM: Who are your partners, and who are you looking to collaborate with in the near future?

DV: We believe that partners are essential to helping customers build the outcome-centric IT infrastructures that they need to thrive in the digital age. NetApp boasts of a strong ecosystem of partnerships and some of its partners include CISCO, Microsoft, Intel, Fujitsu, VMWare, Accenture, HP, IBM, Lenovo and SGI. In India, we have partnered with Unisys, Cognizant, IBM, Wipro, and Dimension Data among other organisations.

Whether it is our recent partnership with DreamWorks or with Lenovo, the goal is to bring leading technology and scale so that customers can modernise their IT architectures from the edge to the core network to the cloud.

NetApp is also contributing to the startup ecosystem in the country through the NetApp Excellerator, the company’s flagship startup accelerator program based in India, and we have forged strategic partnerships with four startups through the last two cohorts.

AIM: How does NetApp compete with some of the competitors in its industry?

DV: One of the main reasons for the success of NetApp is that we hold the success of our customers paramount. There is an ongoing effort to innovate and keep pace with the technological advancements, alongside setting industry standards. We are also making investments in building the foundation for using data alongside artificial intelligence and machine learning. It is through innovation alone that we compete with companies that are several sizes bigger than us. We look at how to turn threats into opportunities and use that to our advantage. The biggest example is that only NetApp has partnerships with Amazon, Microsoft, Google and Alibaba to offer hybrid cloud data services. Today all the biggest cloud providers who were considered threats are our partners!

AIM: How has been the growth story of NetApp so far? What are the challenges that you face as a company?

DV: Over the last two-and-a-half decades, we have transformed as a company from a legacy storage provider to a company positioned to help businesses accelerate outcomes through digital transformation. We have transformed our own IT by refocusing it and made the bold decision to retool our go-to-market platform. And the journey so far, to say the least, has been phenomenal. This reflects in the fact that NetApp has been able to cement its position as an industry leader in the hybrid cloud space today.

One of our main challenges is to ensure the security of the data we deal with, without affecting the progress of the business. We understand the importance of data privacy and implement the highest data privacy foundation for our own business. Our team is actively seeking to make our products future-proof – not just in making them secure, but in all ways possible.

AIM: Tell us about NetApp’s global giving program #NetAppServes which focuses on social causes, including providing education to underprivileged kids.

DV: Serving the communities around us is a big part of NetApp’s culture of giving back. As part of our corporate social responsibility, we have several initiatives in place to address issues ranging from hunger and malnutrition to education, sanitation and hygiene. To enable this, we have a Volunteer Time Off (VTO) program through which employees get five days of fully paid volunteer time off each year.

We have been working with local NGOs to provide digital education, including basic coding and programming, to children from underprivileged homes. Recently, our CSR and Women In Technology (WIT) teams rolled out a new initiative to introduce middle school girls to STEM (Science, Technology, Engineering and Maths) education.

Women worldwide are grossly underrepresented in STEM fields with fewer girls opting for STEM subjects at the university level due to various factors. When young girls are mentored by these women technologists, they have role models they can relate to and look up to. NetApp employees have also been mentoring students from Parikrma Humanity Foundation and teaching them basic computer programming.

AIM: What do you think people with very little background in computers and analytics should do to upskill themselves and master the field?

DV: It is a good time to be a professional in the space of Data Science as it has some of the most promising jobs to offer today. Any individual planning to become a data analyst would require a multidisciplinary skill set including data wrangling, predictive analysis, data visualisation, domain knowledge and good business acumen. Foremost though would be a love and passion for number crunching and analysing data/statistics.

A background in engineering/mathematics/data science (such as a Master’s or PhD) will make transitioning into the job of a data analyst easier. Skills such as Hadoop or Big Data can also be useful. Those with little background in the subject can also make use of the various resources online or opt for internships and mentoring programs to upskill themselves.

AIM: What are NetApp’s objectives in 2019? What are your long-term goals?

DV: NetApp, for long has been constantly working to provide our customers with the choice of how they want to leverage their data and workload. I mentioned the Gartner report that predicts that by 2020, the public cloud services market in India is expected to reach $4.1 billion. According to them, the additional demand from the migration of infrastructure to the cloud, as well as increased demand from compute-intensive workloads, both in enterprise and startup space, is driving this growth.

IDC forecasts that by 2025, the global datasphere will grow to 163 ZB (a zettabyte is a trillion gigabytes). More than a quarter of this data will be real-time in nature, and real-time IoT data will make up more than 95 percent of it. IDC believes the future of enterprise storage is software-defined, server-based and cloud-connected, with a full suite of enterprise storage data services.

The NetApp Data Fabric simplifies the integration and orchestration of data for applications and analytics in clouds, across clouds and on-premises to accelerate digital transformation. Only NetApp can deliver a Data Fabric with consistent data services for data visibility and insights, data access and control, and data protection and security.

The post NetApp’s 5-phase Model Visualises Seamless Flow Of Data In IoT Projects: Deepak Visweswaraiah appeared first on Analytics India Magazine.

With 7.5 Petabytes Of Data Processed Daily Across 570 Channels, BARC Is Leveraging Analytics And ML To The Fullest


BARC, an Indian TV audience measurement and insights company, is built upon a strong future-ready technology backbone. With a robust panel of 33,000 homes, across the length and breadth of the country, they are certainly the largest measurement body of its kind in the world. They have granular data cutting across geographies, age group, gender and socio-economic groups. In fact, their measurement data empowers a TV broadcast industry worth over $3 billion in terms of advertising revenue.

The sample size is larger than that of panels in many other countries, and they use analytics and data science to enable contextual programming and advertising decisions. Analytics India Magazine got in touch with Romil Ramgarhia, COO of BARC India, who has extensive experience across the media, telecom and manufacturing sectors, to talk about dataset curation as a service, analytics for audience measurement, machine learning at BARC and more. Before joining BARC, he worked with the likes of Viacom18, Bharti Airtel, Asian Paints and ACC in different capacities.

AIM: What is your take on dataset curation as a service and why do companies want to outsource these services?

RR: At BARC India, dataset curation entails processing high volumes of complex data daily. We have overcome existing silos and process limitations to look at data holistically as “data” only and provide an unbiased view to the industry at large. Unlike traditional business processes or technology, this infrastructure is difficult to build in-house, and thus most organisations find value in engaging an external partner who can bring in the technology know-how, discipline and governance. At BARC, however, the end-to-end process from data capture to data release is built and managed in-house by a team that brings in the technical know-how and is involved in daily operations 24×7. Technology is one of the largest teams we have at BARC. We process 7.5 petabytes of data daily, covering more than 10 million viewer records.

AIM: How is the data collected from various channels? What are the kind of datasets provided by BARC India to the companies?

RR: The world-class watermarking technology currently in place identifies the time-stamp and the channel being watched. The watermark is an encrypted code inserted into the channel’s content feed. The technology is two generations ahead of all other technologies implemented in the space and is platform-agnostic. Our BAR-O-meters, installed in the panel homes, capture this information by reading the watermark and relaying it to our servers. The BAR-O-meter device was indigenously designed by BARC India at one-sixth the cost of globally available meters. This has helped us achieve scale despite being one of the most diverse television markets in the world. We also deploy a fingerprinting technology, which is used for coding and identifying playout content, i.e. programs, promos and TVCs.
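The actual BARC watermark format is proprietary and encrypted; purely to illustrate the idea of a payload that encodes a channel and a time-stamp for a meter to read, here is a hypothetical sketch with an invented two-field layout:

```python
import struct
from datetime import datetime, timezone

# Hypothetical payload layout: 2-byte channel id + 4-byte unix timestamp.
# The real watermark format is proprietary; this only illustrates the concept
# of a meter recovering "which channel, when" from an embedded code.

def encode_watermark(channel_id: int, ts: int) -> bytes:
    """Pack channel id and timestamp into a 6-byte big-endian payload."""
    return struct.pack(">HI", channel_id, ts)

def decode_watermark(payload: bytes) -> dict:
    """What a meter would do: recover the channel and viewing time."""
    channel_id, ts = struct.unpack(">HI", payload)
    return {
        "channel_id": channel_id,
        "watched_at": datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(),
    }

payload = encode_watermark(417, 1546300800)
print(decode_watermark(payload))
```

In a real deployment the payload would be embedded inaudibly or invisibly in the broadcast signal and encrypted, but the round-trip above captures the essential information flow from broadcaster to meter to server.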

An analytics software known as BARC India Media Workstation is available with all subscribers, which offers 6,000+ data cuts for granular analyses and better planning. Looking beyond just data, we also offer a range of custom products which provide data-driven, actionable insights to the industry.

AIM: How are the datasets provided by BARC India helping the media industry? Would you like to highlight few use cases?

RR: BARC India TV viewership data is the currency on the basis of which over 3 billion USD of advertising decisions, and 2 billion USD of investment in content and marketing, are made. Granular data helps broadcasters improve their programming strategy and aids brands in planning better and targeting the right audience. While mediums such as print and radio are difficult to measure, TV viewership and campaigns are measured accurately with our data to analyse impact and return on spends.

Let’s take the example of the ‘Swachh Bharat’ campaign, which was advertised massively on television to educate citizens and promote the use of toilets. Between 2016 and 2018, the central government aired 4.8 lakh spots on various channels for this cause. As per the findings of our Broadcast India 2018 Survey, 71% of Indian homes today have access to toilets; in 2016, this stood at 58%. That is the impact and reliability of what we do. The data has also helped many broadcasters plan their new channel launches across SD and HD feeds. We started by measuring 277 channels in 2014 and today we are measuring 570+ channels.

We have also introduced ‘BIO News’, a specific product for TV news channels. News is a dynamic, event-led genre with no fixed programming, unlike a daily-soap channel. The BIO News product gives detailed insights into editorial aspects such as the air-time of a news story, the performance of news anchors and the impact of famous personalities appearing on news channels. In the wake of the upcoming General Elections, our insight tool will help news broadcasters drive effective programming and derive maximum ROI.

AIM: How is analytics used in managing a transparent, accurate and inclusive TV audience measurement system? How is it benefiting TV viewers?

RR: The transparent methodology and panel representativeness make our data reliable enough to be used as the trading currency of broadcast media in India. Our automated panel management system manages the complex process of recruiting and training panel homes through real-time tracking of field agents and field tasks. The live viewership data flows from the metered homes into our servers and is processed by multiple ML and AI applications for quality checks before the final viewership data is released.

Broadcasters and content producers use our analytics and insights to improve their programming strategy, which is one of the key factors leading to increased TV viewership year on year. Our data also empowers Media Agencies and Advertisers to plan TV campaigns more effectively and reach the right audience, which in turn increases their return on spends.

AIM: What are the analytics and machine learning tools used by BARC India? How does the technology stack look like?

RR: At BARC India we could anticipate the growing complexity of data and the clear need for machine learning. With as much as 7.5 petabytes of data being processed every day by 900 CPU cores, we require platforms that provide high processing power and scale with the growing data volume. We are currently leveraging platforms such as Spark and Hadoop for big data management. We also use Python for some of our analyses, statistical modelling and data simulations. We would be among the very few companies in the world to have moved our entire core production stack onto big data platforms.

AIM: Please tell us about the enterprise business intelligence and analytics platform by BARC India?

RR: BARC India Media Workstation (BMW) is our business intelligence and analytics platform, powered by Markdata, a leading global audience analysis company. The software allows users to generate custom data sets from our viewership data going back to October 2015. Users can generate reports at a channel level, programme level, and even for TV ads and promos. We report 570+ watermarked channels through the software, including HD channels. Multiple demographic cuts are available, viz. 16 state groups, 9 age cuts, 5 pop-strata and 3 NCCS groups, among others. The software also allows one to generate viewership reports at the minute level for extremely granular and in-depth analysis.

BIO News is also one of our proprietary analytics platforms, customised for broadcasters in the news space. With a graphical and dynamic interface, the platform gives data-driven insights across markets and target groups. It has the capability to map a news clip to the viewership build-up, to identify the high and low points of a story that was aired.

We are in the process of creating more such insight-based platforms that will address the specific needs of various genres.    

AIM: How is machine learning used at BARC India in daily operations? Would you like to highlight few use cases of how you have been benefited with the adoption of ML?

RR: We process viewership data from more than 160,000 individuals every 8 seconds, and then combine this data with demographic diversity and the broadcast content. The main challenge is finding anomalies in the data and taking decisions on them well in advance. Initially, we used SMEs to constantly monitor and determine these anomalies. But with the increasing data volumes and hundreds of permutations and combinations, we quickly moved to a combination of statistical models (machine learning), an SME knowledge base and local intelligence to determine a course of action. With the adoption of AI and ML, the system can explain viewership spikes and highlight the outliers. For example, every year on Republic Day and Independence Day there is a distinct peak in viewership, which the system attributes to the event instead of flagging it as an anomaly. However, this is a continuous journey for us and we are improving our ML capabilities with each passing day.

The post With 7.5 Petabytes Of Data Processed Daily Across 570 Channels, BARC Is Leveraging Analytics And ML To The Fullest appeared first on Analytics India Magazine.


Here’s How IBM Researchers Are Making Chatbots More Intuitive With Natural Dialogue Flow


Chatbots have revolutionised the customer service space, with most industries resorting to this ingenious piece of innovation. Be it booking a flight, ordering food or playing music, we can now turn to these familiar friends who are there for us 24×7. Building chatbots today is fairly easy, as various frameworks are available for the purpose. However, not many of them meet expectations.

While today’s frameworks are easy to model and deploy, most chatbots struggle with even simple tasks and with conversation flow. It can be quite daunting to induce natural dialogue flow and the richness of natural conversation into these bots. A designer has to anticipate what users are going to ask, how they are going to ask it, how the bot should respond, how the conversation will proceed, and more.

The team at IBM Research is analysing past human-to-human logs using deep learning and state-of-the-art natural language processing techniques to answer these questions. The analysis of human-to-human logs suggests to a dialogue designer the key things users are going to ask, as well as the dominant ways in which users express themselves.

To understand how IBM is overcoming the challenges of designing a chatbot, the evolution of the chatbot industry in India and more, Analytics India Magazine got in touch with Gargi Dasgupta, Senior Manager at IBM Research India, who leads the cognitive tech support area. She shares some interesting insights on the subject in this detailed interview.

Analytics India Magazine: While we see a lot of chatbots coming up in the market, their practical implementations aren’t quite successful. What is the reason for it? How can we overcome these?

Gargi Dasgupta: A lot of the failures with current chatbots in the market are because of two main factors:

  1. Over-dependence on manual effort for creating chatbots, which makes them both time-consuming and brittle
  2. Lack of an end-to-end fall-back plan with continuous learning and improvement.

We at IBM Research are working towards training these applications with data by infusing AI and related technologies to help organisations get business value.

We are working on scalable domain modelling from unstructured data like chatlogs and documents so that domain knowledge can be created leveraging the existing data. Chatbots can then learn from the existing domain models. We are also working on techniques where continuous feedback can be learned by the chatbot to improve the conversations for the next time.

AIM: What are the major challenges in building a fully-functional and conversant chatbot? Why do chatbots fail to have the richness of human conversations despite several attempts?

GD: The biggest challenges in building chatbots are:

  1. Handling nuances of the user language
  2. Handling domain and context understanding
  3. Making them affordable and useful

Human conversations are very rich and diverse in their usage of words, sentences, languages, emotions and contextual references. It is natural for humans to use domain-specific references in conversation. For example, when asked the question “Do you enjoy Apple?” in the middle of a workplace discussion, humans are likely to infer the domain and interpret that the question relates to the company Apple and not the fruit. This contextualisation, however, is hard for a machine, and it is one of the reasons why chatbots may fail to impress.

Another challenge is that natural human conversations contain a lot of cross-references. For example, a follow-up question such as “How does that compare with your previous one at Facebook?” is actually asking for a comparison of the stints at Apple and Facebook. Relating “that” to the stint at Apple is intuitive for a human but harder for the machine.

Thus, the lack of richness in chat conversations can be attributed to technology not yet being able to handle the whole diverse range of human language variations, domain contextualisation, cross-references, colloquial expressions, and emotions.
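As a toy illustration of the domain-contextualisation problem (not IBM's actual approach), the "Apple" example can be approximated by scoring the overlap between the surrounding context and cue words for each sense; real systems use trained contextual models, and the cue lists here are invented:

```python
# Invented cue words per sense of the ambiguous term "Apple".
SENSE_CUES = {
    "company": {"workplace", "stock", "iphone", "ceo", "facebook", "job"},
    "fruit": {"eat", "juice", "orchard", "ripe", "snack", "pie"},
}

def disambiguate(term: str, context: str) -> str:
    """Pick the sense whose cue words overlap most with the context."""
    words = set(context.lower().split())
    scores = {sense: len(words & cues) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

print(disambiguate("Apple", "do you enjoy your job at the workplace"))
# the workplace context pushes the interpretation towards the company
```

This bag-of-words trick obviously cannot handle the cross-reference problem ("that", "your previous one"), which is exactly why the interview points to deeper NLP techniques.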

AIM: How can companies make use of the continuous user inputs that are fed into a chatbot? How can intelligence be derived from consumer insights?

GD: Companies can derive insights from user inputs and strive to continuously improve the quality of conversations. Users often give explicit feedback (e.g. a thumbs up or down) or implicit feedback (e.g. saying thank you, or abruptly closing the session after a few tries). All of these are valuable inputs and are used to retrain the current dialogue models. Through AI models, user feedback is fed back into the dialogue models, helping them learn new things so that answers improve over time.
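One minimal way to act on such explicit feedback, sketched here with hypothetical names and thresholds, is to aggregate thumbs up/down per intent and surface the intents whose answers most need retraining:

```python
from collections import defaultdict

def retraining_candidates(feedback_log, min_events=3, max_approval=0.5):
    """feedback_log: list of (intent, thumbs_up: bool) events.
    Return intents with enough feedback and a low approval rate,
    i.e. the dialogue models most in need of retraining."""
    stats = defaultdict(lambda: [0, 0])  # intent -> [thumbs_up, total]
    for intent, thumbs_up in feedback_log:
        stats[intent][0] += int(thumbs_up)
        stats[intent][1] += 1
    return sorted(
        intent
        for intent, (up, total) in stats.items()
        if total >= min_events and up / total <= max_approval
    )

log = [("billing", False), ("billing", False), ("billing", True),
       ("greeting", True), ("greeting", True), ("greeting", True)]
print(retraining_candidates(log))  # 'billing' gets mostly thumbs-down
```

A production pipeline would also weigh implicit signals (abandoned sessions, repeated rephrasings) and feed the flagged intents back into model retraining rather than just listing them.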

AIM: How has the chatbot industry evolved in India?

GD: The enterprise chatbot evolution in India started with everyone being wowed by the fact that they could have a 24×7 channel of communication with their customers, and could delight customers with instantaneous responses to their queries. For businesses, it opened up yet another channel of communication with their customers.

India is a key player in the chatbot market today, with chatbots having been introduced by many banking and insurance companies. Banks and insurance providers have been among the early adopters of chatbots in India, helping customers with bill payments, mobile recharges, booking travel and so on. Many are further looking at offering services such as registering a claim, getting a quote and helping customers understand their various policies better.

In all of this, we realise that one big challenge for chatbots in India is support for local languages. 90% of new internet users are expected to be non-English speakers in the next few years. Our research teams are working with linguists alongside technologists to develop verticalised vernacular models for handling local languages.

AIM: What are the efforts carried out by IBM in the area of chatbots? How is the company using deep technologies to continuously improve chatbots?

GD: Enterprises today have existing customer care support channels where end users can either call or chat with human agents to resolve their problems. These human-to-human conversation logs (h2h logs) are a goldmine of customer insights such as “what are the problems being faced by customers” and “how agents are handling and resolving customers’ problems”. IBM is using AI techniques to analyse these logs and generate customer insights automatically. These insights are used by enterprises for modelling of the domains for which they need the bots.
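A very small stand-in for this kind of log mining (IBM's actual pipeline uses far richer NLP than word counting) is to count content words across transcripts and surface the dominant topics customers raise:

```python
import re
from collections import Counter

# Illustrative stopword list; real pipelines use curated, domain-aware lists.
STOPWORDS = {"the", "a", "to", "is", "my", "i", "and", "it", "of", "you", "not", "on", "since"}

def top_topics(transcripts, k=3):
    """Count content words across human-to-human transcripts and
    return the k most frequent, as a crude proxy for dominant issues."""
    counts = Counter()
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

logs = [
    "my internet connection keeps dropping",
    "internet is down since morning",
    "billing error on my internet invoice",
    "billing amount charged twice",
]
print(top_topics(logs))  # 'internet' and 'billing' dominate
```

The insight is the same as in the interview: the frequency structure of the logs tells the dialogue designer which intents to model first.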

The other big challenge IBM is focussing on is helping chatbot technology advance towards taking seamless actions in the enterprise. 50% of chat users feel that today’s chatbots cannot execute tasks, which is where the real benefits lie. However, with integration into enterprise APIs and evolved reasoning and understanding, IBM is looking forward to a world of seamless question-answering, taking actions and falling back to live agents whenever required.

AIM: What are the key points to keep in mind while designing a chatbot? What are the areas the IBM researchers, in particular, give a lot of importance to?

GD: While designing chatbots it is crucial to keep in mind the affordability of creating chatbots and their usability. Chatbots are only useful when they help a particular business function to improve. Enterprise chatbots today need to address two fundamental challenges:

  1. Robust modelling of business function(s) for which questions are to be answered
  2. Seamless handling of answering questions, taking actions, falling back to live agents

IBM Research is focussed on spearheading the adoption of chatbots in the enterprise by working on the above research problems.

AIM: How do analytics play a crucial role in building smart chatbots?

GD: Analytics can give us pointers to how the chatbot will be used and also tell us how the current chatbot is performing. It also provides pointers to areas of improvement in the form of commonly misunderstood questions and intents.

AIM: How does IBM make use of customer insights to drive automation?

GD: By 2020, 30% of enterprise tasks will be executed by automation bots. Conversation provides a natural and intuitive interface for the automation of simple, repetitive tasks. The way to drive the success of these virtual assistants in the enterprise lies in combining automation with conversational interfaces. For example, the request “Can you help me transfer $100 to account Y” requires integration with backend enterprise APIs and entails an understanding of the transfer action, the amount of money and the payee account. IBM is focusing on driving automation through conversational interfaces by developing the backend integrators as well as the front-end NLP and ML techniques for understanding user questions.
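The transfer example can be sketched with a simple pattern-based parser that extracts the intent and its slots (amount, payee) before handing off to a backend API. Real systems use trained NLU models rather than regular expressions, so everything below is purely illustrative:

```python
import re

# Hypothetical pattern for the "transfer $100 to account Y" utterance.
TRANSFER_PATTERN = re.compile(
    r"transfer\s+\$?(?P<amount>\d+(?:\.\d+)?)\s+to\s+account\s+(?P<payee>\w+)",
    re.IGNORECASE,
)

def parse_transfer(utterance: str):
    """Extract the transfer intent and its slots from an utterance.
    Returns None when the utterance doesn't match, which is where a real
    assistant would ask a clarifying question or fall back to a live agent."""
    match = TRANSFER_PATTERN.search(utterance)
    if not match:
        return None
    return {
        "intent": "transfer_funds",
        "amount": float(match.group("amount")),
        "payee": match.group("payee"),
    }

print(parse_transfer("Can you help me transfer $100 to account Y"))
# {'intent': 'transfer_funds', 'amount': 100.0, 'payee': 'Y'}
```

Once the slots are filled, the action itself is a call into the enterprise backend, which is exactly the integration challenge the answer above describes.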

AIM: What is the cost involved in building a chatbot? Do these costs compensate for the benefits?

GD: The cost involved in building a chatbot includes:

  1. A one-time investment in understanding the domain, the target audience and the use case for which the chatbot is being built. This involves effort from domain experts as well as a data scientist. Currently, it takes 3-6 months for a data science team to build a chatbot for a domain.
  2. Once built, frequent re-training or “care and feeding” of the chatbot is needed. Depending on the amount of re-training to be done, it can take from a few hours to a few days.

Factoring in these two costs, and depending on the complexity of the domain, it can take 5K to 40K to build an enterprise chatbot. The target is to make chatbots much more affordable by reducing the chatbot creation time to days instead of months.

AIM: How do you intend to revamp customer services through AI? Would you like to highlight use cases?

GD: Today, customer expectations are very high, and they would like to have a 24×7 connection with high levels of speed and accessibility. Industry-wide there are about 265 billion customer calls every year of which 50% go unresolved. At the same time, support agents struggle with the deluge of updated product documentation and new releases. AI can help transform this support industry in 3 ways:

  1. Create customer-facing technologies (like a virtual assistant) that can understand customer intents and personalise and contextualise responses to customer
  2. Integrate with backend information systems which access key customer information as well as enterprise capabilities so that useful advice and actions can be offered whenever possible
  3. Act as assistants to the thousands of support agents as they handle complex customer cases by presenting insights and recommendations

The post Here’s How IBM Researchers Are Making Chatbots More Intuitive With Natural Dialogue Flow appeared first on Analytics India Magazine.

Party Loving Genius John von Neumann Changed How We See The World Through Game Theory


Eugene Wigner, the Nobel Prize-winning theoretical physicist, once said, “One had the impression of a perfect instrument whose gears were machined to mesh accurately to a thousandth of an inch.” He was not talking about any latest gadget; he was talking about the mind of John von Neumann.

Computer science, philosophy, economics, political science and biology are only a few of the numerous fields that von Neumann’s work greatly impacted. He practically invented game theory and applied its findings to a range of applications. He didn’t stop there; his contributions to statistics and fluid dynamics were also seminal.

To put it mildly, the world would have been much different, and less technologically advanced, if not for the genius and work of von Neumann. He even worked in the field of artificial intelligence with none other than Alan Turing, when the British scientist visited Princeton. Von Neumann was also a key contributor to the Monte Carlo method, which lets scientists approximate solutions to hard problems using random numbers.
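The spirit of the Monte Carlo method is easy to demonstrate with the textbook exercise of estimating π from random points in the unit square (a standard classroom illustration, not one of von Neumann's wartime applications):

```python
import random


def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle, times 4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples
```

With 100,000 samples the estimate typically lands within a few hundredths of π; the error shrinks roughly as one over the square root of the sample count.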

How It All Started

Von Neumann was a child prodigy and a genius. He had already mastered differential and integral calculus by the age of eight. His school, the Fasori Evangélikus Gimnázium in Budapest, also shaped minds like Edward Teller (the father of the hydrogen bomb), Leo Szilard (nuclear science) and Eugene Wigner, among others.

By the time he was 20, Von Neumann had won the Eotvos Prize — Hungary’s highest prize for mathematics. George Polya, one of the best mathematical minds of the time, said of his pupil, “Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, he’d come to me at the end of the lecture with the complete solution scribbled on a slip of paper.”

Von Neumann worked on set theory, submitted his thesis in 1926 and was instantly branded a genius. He would walk into conferences and hordes would gather around him. He was the youngest lecturer appointed at the University of Berlin, where he worked from 1926 to 1929, and he taught at Hamburg from 1929 to 1930. Von Neumann also studied under the great mathematician David Hilbert at Göttingen and was already an academic celebrity.

Princeton Years

Von Neumann went on to become a visiting professor and then a permanent professor at Princeton, where he taught mathematics and physics. One fellow professor, speaking about his mathematical teaching, said, “His fluid line of thought was difficult for those less gifted to follow. He was notorious for dashing out equations on a small portion of the available blackboard and erasing expressions before students could copy them.”

His physics teaching stood in contrast: there, people felt he was clearly able to flesh out complex ideas and make them simple. “For a man to whom complicated mathematics presented no difficulty, he could explain his conclusions to the uninitiated with amazing lucidity. After a talk with him, one always came away with a feeling that the problem was really simple and transparent,” said one of his students.

He also grew accustomed to throwing great parties, said to be “frequent, famous and long”. Through these long parties, von Neumann made friends across disciplines and got his hands dirty in fields like representation theory and ergodic theory, which seeded the ideas that would give birth to game theory. Stan Ulam, one of his contemporaries, said, “Von Neumann’s awareness of results obtained by other mathematicians and the inherent possibilities which they offer is astonishing. Early in his work, a paper on the minimax property led him to develop ideas which culminated later in one of his most original creations, the theory of games.”

Contributions to Economics, Computer Science

Von Neumann’s book on game theory showed that many economic situations could be modelled as the outcome of a game between two or more players. The book also gave birth to the idea of utility theory in economics. Today, game theory is applied to economics, law, political science and sociology.
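The minimax reasoning at the heart of zero-sum game theory can be sketched in a few lines. For a payoff matrix (the matrix below is invented for illustration), the row player's best guaranteed payoff (maximin) and the column player's tightest cap on that payoff (minimax) coincide when the game has a saddle point:

```python
def maximin(payoff):
    """Row player's best guaranteed payoff in pure strategies:
    for each row assume the column player minimises, then pick the best row."""
    return max(min(row) for row in payoff)


def minimax(payoff):
    """Column player's cap on the row player's payoff: for each column
    assume the row player maximises, then pick the column limiting it most."""
    cols = list(zip(*payoff))
    return min(max(col) for col in cols)


# A zero-sum game with a saddle point: the two guaranteed values coincide,
# so neither player can gain by deviating unilaterally.
game = [
    [4, 1, 3],
    [2, 0, 2],
    [5, 2, 4],
]
```

When the two values differ, no pure-strategy equilibrium exists and von Neumann's minimax theorem guarantees one in mixed (randomised) strategies instead.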

He also took an interest in computer science and computation, figuring out how a computer could have a simple, fixed structure yet perform any kind of calculation given the programmed control, without changes to the hardware. The now-famous von Neumann architecture, the design model named after him, is a stored-program digital computer that uses a processing unit and a single storage structure to hold both instructions and data.
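A toy fetch-decode-execute loop captures the essence of that design: one memory holds both the program and its data, and a processing unit steps through it. The four-instruction set below is invented purely for illustration:

```python
# Toy stored-program machine: instructions and data share one memory,
# and a processing unit runs a fetch-decode-execute loop.

def run(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, arg = memory[pc]           # fetch
        pc += 1
        if op == "LOAD":               # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory


# Program (addresses 0-3) and data (addresses 10-12) live side by side
# in the same memory: computes memory[10] + memory[11] into memory[12].
program = {
    0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
    10: 2, 11: 3, 12: 0,
}
```

Because the program is just data in memory, a machine like this can load, modify and run new programs without any hardware change, which is exactly the flexibility the architecture was designed for.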

Von Neumann also has many other influences on many fields which have changed the world for the better. Wigner aptly summarised, “There are two kinds of people in the world: Johnny von Neumann and the rest of us.”

The post Party Loving Genius John von Neumann Changed How We See The World Through Game Theory appeared first on Analytics India Magazine.

Zeroth.AI Has Invested In 33 Companies From 15 Countries In Just One Year: Sachin Unni


Zeroth.AI accelerated and invested in its first cohort of startups in India. The AI-focused accelerator is working with DeepCore, a Japanese incubator and fund that is part of the SoftBank group. Analytics India Magazine got in touch with Sachin Unni, partner at Zeroth.AI, to learn about the accelerator’s plans in India. Zeroth plans to open two new programs — one in Bengaluru and one in Tokyo — which will be run by Sachin Unni and Hajime Hotta respectively.

Unni is a serial entrepreneur and a tech advisor with almost a decade of experience with startups. He is actively involved in the startup ecosystem with angel investments and as a mentor to multiple product companies.

AIM: Tell us about Zeroth.AI and its AI-focussed model

SU: Zeroth is one of Asia’s first accelerator programs backing founders working on emerging technologies like AI. We invest capital in visionary founders at the pre-seed and seed stage, and partner with them over three months to build their startups through a team of experienced entrepreneurs, a network of partners and investors, and a community of other founders. In just over a year, Zeroth has invested in over 33 companies from 15 countries. Zeroth companies have attracted top venture investors from around the world, and continue to grow rapidly.

AIM: Tell us about the recent achievements of Zeroth.AI and the impact it has made

  • Zeroth has invested in 33 companies, with Hong Kong as the central location. Now, with new cohorts, we have almost doubled the number of investments
  • In a little more than a year, we have expanded to two new locations (India, Japan), and in 2019 we will make our presence stronger in other strategic locations
  • Partnered with DeepCore — a Japanese incubator and fund that is part of the SoftBank group — to use resources for deal sourcing and other collaboration around artificial intelligence

AIM: Can you talk about Zeroth.AI’s India plans and reasons for entering the Indian market?

SU: As mentioned earlier, I have been in the startup ecosystem for many years but, unfortunately, I have witnessed a lack of support for product startups. The reason is the need for funds “that have patience”. This is the basis of our accelerator and our idea of supporting early-stage startups in frontier technology. India as a market and breeding ground for tech talent is no surprise, but we have noticed a major spike in product startups making it out of the garages and aiming for global reach with the power of data. Given the right launchpad, product startups from any location can solve problems or improve on available solutions, and India is one of the key locations.

AIM: What kind of startups are you looking for?

SU: Pre-seed and seed stage startups working on frontier technologies such as artificial intelligence and robotics.

AIM: What are the best ways to reach you and convince Zeroth.AI to accelerate an AI-based startup?

SU: A team that can explain the idea and vision in chorus, with no separate versions, even though some variation is acceptable due to individual perspectives. The Zeroth partner team consists of ex-founders with experience in products and company exits, including fundraising along the way. Everyone understands the grind that’s needed to bring a product from idea to reality, and we do not ignore the fact that there will be pivots in strategy or roadmap. But if the founders who reach out to us can explain their idea, approach and research notes on their company, then there is no stopping us from selecting the startup, regardless of revenue or traction numbers.

The post Zeroth.AI Has Invested In 33 Companies From 15 Countries In Just One Year: Sachin Unni appeared first on Analytics India Magazine.

Trust Is Central To Banking And CustomerXPs Helps Them Leverage It, Says Rivi Varghese


Founded by fintech product experts, CustomerXPs believes that, like a human being, a customer too has a soul – the sum total of what a customer is. “Among all the industries, banking is the only one that captures the customer’s soul,” said Rivi Varghese, CEO, in a candid conversation with Analytics India Magazine. He shared some interesting insights about how their company leverages artificial intelligence, machine learning and analytics to provide real-time enterprise solutions to global banks, their growth plans and more.

Varghese entered the world of AI and real-time analytics about 18 years ago. A course on statistics during his MBA at IIM Bangalore exposed him to methodologies for massive data crunching which encouraged him to conceive a startup that focused on the same.

“In 2006, we started creating solutions using fuzzy logic and eventually advanced to tools in AI,” he says. They then decided to sharpen their focus on developing extreme real-time solutions for banks using AI, ML and analytics.

“Among all the industries, banking is the only one that captures the customer’s soul.”

-Rivi Varghese
Analytics India Magazine: Why did you begin with banking as your area of interest?

Rivi Varghese: When we started about 10 years ago, we had already interacted with more than 200 banks in over 20 countries. That’s when I realised that banking is that one unique sector where the entire life of the customer goes through the bank. While every other industry has a fragmented, unidimensional view of its customers, a bank knows everything about you — how much you earn, your marital status, where you travel to, whether you live on rent, even how much fuel your vehicle consumes.

But we found it odd that banks said they were in the relationship business when, in reality, they were not quite sure how. We started CustomerXPs to help banks actually discover their customers’ soul, and realised that AI and ML were the way forward. We started developing our product using computational technologies and fuzzy logic engines. We are talking about chatbots now, but our customers went live with them eight years ago.

AIM: How is your approach of providing AI to banks different?

RV: We believe in putting the end state in perspective and then working towards it. We put substantial effort into understanding a client from scratch before developing the AI solution. AI for us is the means to an end and not the end in itself. Almost every bank has made an investment in some way or the other in AI and ML.

The usual practice is to buy costly software licenses, put hundreds of ML engineers on the job and spend millions of dollars on huge hardware. After years of such costly spending, we still find that banks are saddled with the “know why” and the science, but still haven’t figured out the “know how” – the exact problem they are trying to solve. We decided to tilt this practice.

We sit down with banks to understand their problems in depth before developing solutions that work for them. Mere models and fancy tech jargon will not solve problems – we need a different, real, smarter way of getting things done.

AIM: Does that mean that you offer customised AI solutions to banks?

RV: Let’s first look at the three top hurdles preventing large banks from deploying large-scale AI.

First, they don’t have access to absolute real-time data. This is because the data warehouse is a more or less failed concept, and everywhere the data is late by hours, days or even months. The data is not structured properly, and even if you feed it into ML models, they don’t work the way they should.

Second, even if you have non-siloed data and can create phenomenal models that help in fraud detection and protection, they still lack the last-mile connectivity to influence a transaction while it is in flight, the fleeting moment of truth. Not being able to do this reduces your AI efforts to just yet another report. The best way to do this is with an enterprise-wide anti-fraud system, connected in real time to all the decision points in a bank.

Third, there is the sheer complexity of managing huge software. Conventional products are dated and, most of the time, have their origins in statistics. If you really want to do AI, you need to look at what the world is doing today, be on the bleeding edge and not have a dated approach.

So, what we have instead is an appliance model. We put all the information into the appliance, which contains all the latest software and GPUs. GPUs do massive volumes of parallel processing, and we encapsulate all of this into a box. While the data comes in real time, the massive processing can be done easily.

In the appliance model, a bank doesn’t incur huge capital expenditure, as it is a monthly subscription, and there is no need for licenses – think of the software and hardware as being free. This keeps vendors on their toes to deliver every month, instead of taking 100% of the license fee up front and leaving the bank to figure out what it intends to do.

“While every other industry has a fragmented, unidimensional view of its customers, a bank knows everything about you.”

-Rivi Varghese
AIM: Interesting. And how are you dealing with the issue of money laundering? Where does analytics fit in?

RV: Banking is about customers’ trust, and the moment you lose it, it is gone. Our core product is in enterprise fraud management and financial management, which consists of fraud, anti-money laundering and compliance. We use AI and ML to ensure that all of these work better.

When we say AI and ML for money laundering, it means a lot of things. Dated AML systems generate a lot of alerts. In a differentiated approach, we need to mimic human behaviour and understand what investigators are trying to do. The engine learns from this, so that human effort in terms of intervention, alert resolution times and hit quality improves. From an investigator’s perspective, there are fewer alerts in the first place, less time is spent per alert, and overall accuracy is increased.

Say the investigator has to search a phone number across millions of accounts for interlinkages. With traditional analytics, it takes anywhere from a couple of hours to a few days to process such unstructured data, but our AI engines accelerate this and surface the intelligence in seconds. What we are doing is applying ML in sync with our core belief, not because AI is trending.
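The kind of speed-up described here is, in spirit, what a precomputed inverted index delivers: instead of scanning millions of records per query, linkages are looked up in constant time. A minimal sketch, with invented records rather than CustomerXPs' actual engine:

```python
from collections import defaultdict


def build_phone_index(accounts):
    """Precompute phone -> account ids once, so each linkage lookup is
    O(1) instead of a scan over millions of records."""
    index = defaultdict(set)
    for acc_id, phones in accounts.items():
        for phone in phones:
            index[phone].add(acc_id)
    return index


def linked_accounts(index, phone):
    """All accounts sharing this phone number, in a stable order."""
    return sorted(index.get(phone, set()))


# Illustrative records: accounts A1 and A2 share a number.
accounts = {
    "A1": ["555-0101", "555-0102"],
    "A2": ["555-0101"],
    "A3": ["555-0199"],
}
```

Real systems maintain such indexes incrementally as transactions stream in, which is what makes second-level answers possible on data that would otherwise take hours to scan.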

AIM: Tell us about the analytics and AI tools you use

RV: We have a core product, a real-time decision engine, to analyse, detect and stop fraud within the transaction window itself. The other is the thinking engine in our appliance model. In both cases, we use open-source tools and technologies, such as dashboarding and enhanced in-memory capabilities, to constantly improve our system.

AIM: Could you highlight some of the use cases in banks?

RV: Our use cases are centred around fraud detection, compliance, anti-money laundering and generating alerts. For instance, in fraud, it is about the quantum of money saved for the bank.

At their core, AI and ML operate on probability. There are type I and type II errors, and making the error rate close to zero, though never zero, requires substantial optimisation. Other models work on information asymmetry in banking systems, such as locating users based on where they carry out transactions. This metadata is also important as input to ML parameters.

Every bank is now thinking about AI within its own realm, but banks have only a few labelled cases. Say out of 100 transactions, only one is fraudulent: they end up extrapolating from it 99 times. So, if you have a 10% error in your basic model, the error rate gets multiplied 99 times over and the whole thing becomes unpredictable.
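The base-rate problem Varghese describes can be made concrete with a little arithmetic: with 1 fraud per 100 transactions, even a model with a modest 10% false-positive rate produces far more false alarms than true hits. A quick sketch with illustrative numbers:

```python
def alert_precision(n, fraud_rate, tpr, fpr):
    """Share of raised alerts that are actually fraud, given a rare
    positive class. All inputs here are illustrative numbers."""
    frauds = n * fraud_rate
    legit = n - frauds
    true_alerts = frauds * tpr        # frauds the model catches
    false_alerts = legit * fpr        # legitimate transactions flagged
    return true_alerts / (true_alerts + false_alerts)


# 1 fraud per 100 transactions; a 10% error rate on the legitimate
# majority drowns the few true cases in false alarms.
p = alert_precision(n=10_000, fraud_rate=0.01, tpr=0.9, fpr=0.10)
```

With these numbers, fewer than one alert in ten is real fraud (90 true alerts against 990 false ones), which is why class imbalance, not model accuracy alone, dominates enterprise fraud deployments.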

When you go deeper into serious, large-scale enterprise deployment for banks, these are the hard problems that the banks are trying to crack – and we crack these innovatively.

AIM: How big is your team?

RV: We are now close to 140 fintech experts who build, customise and implement large-scale banking enterprise systems. We know it is vital to first define each problem precisely and then decide on the right ingredients to solve it. Once we have these sorted, we decide on hiring. We work with leading system integration partners to give us scale.

AIM: What is the roadmap for 2019?

RV: Every day the world wakes up, we will make available to our banks the global intelligence of “whatever the world has learnt till then”, to be used for every decision related to “trust” in a bank. This is the goal we are working towards, and we expect all our key customers to sign up for this vision of ours.

The post Trust Is Central To Banking And CustomerXPs Helps Them Leverage It, Says Rivi Varghese appeared first on Analytics India Magazine.

How This Pentagon Contract Brought About The Fall Of AI Guru Fei-Fei Li


Leading AI ethicist and one of the brightest stars of the AI world, Fei-Fei Li has seen her meteoric rise in the academic and tech worlds upstaged by a batch of leaked emails that seem to have derailed her career, albeit temporarily. The well-known AI guru, whose rise from humble beginnings as the daughter of Chinese immigrants has been well chronicled, came under the scanner after her purported role in the Google Pentagon contract became public: in an email exchange with senior executives, one of the leading AI ethicists sounded more concerned about the Mountain View behemoth’s image than about the ethical concerns surrounding the deal.

The expert behind Google’s spectacular growth in AI cloud will reportedly leave Google this year and return to Stanford. The leaked emails brought to the fore how tech giants try to build thought leadership as enterprises building humanistic AI while, at the same time, winning millions of dollars in contracts from government law enforcement agencies. The emails brought about the dramatic departure of Li, who until then had been regarded as a torchbearer on ethics in AI and had been part of several committees.

Here’s how Li played a role in building up Google’s Cloud AI division. Co-director of the Stanford Human-Centered AI (HAI) Institute, Li was formerly chief scientist of the AI and ML department of Google Cloud Platform. Over the years, she has worked across many research areas, from computer vision, machine learning and artificial intelligence to cognitive neuroscience. With all her successful projects and accolades, Li is known today as an AI cloud guru bringing humanity to AI.

Here’s how the controversial project dented Dr Li’s impeccable AI image:

Project Maven

The year 2018 was a rough ride for Google, and this affected not only the company but also its employees. Li is among those affected, as her life took a serious turn. The revelations about Google’s role in a military image-recognition project, Project Maven, led to the resignation of about a dozen employees. Witnessing the situation, the search giant decided to end ties with the program. However, Google didn’t reveal what role it was playing in the project.

Email Leak

Allegedly, the programme used machine learning technology to analyse images captured during war. Maven has also reportedly been used since 2017 in the fight against ISIS. Unfortunately, Li was attached to the controversy when some internal emails were leaked, in which she praised the contract but asked her colleagues not to say anything about the AI component of the deal, fearing that the public wouldn’t like the idea of AI in weapons.

Following the controversy, in September 2018, news emerged that Li would leave Google to continue her work at Stanford, and that she would be replaced by Andrew Moore, a former Googler and currently dean of the School of Computer Science at Carnegie Mellon University. However, the search giant said that replacing Li with Moore had nothing to do with the Project Maven controversy.

This is what Google stated in a blog post on 10 September 2018: “Dr Andrew Moore, Dean of the School of Computer Science at CMU, will be joining Google Cloud to head up Google Cloud AI at the end of 2018 and advising in the meantime. Dr Fei-Fei Li will be returning to her professorship at Stanford, as originally planned, and she will transition to be an AI/ML Advisor for Google Cloud.”

After Amazon and Microsoft, Google is a tough competitor in the cloud space, and its unit leans heavily on AI tools to lure potential clients. Fei-Fei Li played a vital role at the company from the day she joined. During her time at Google, she not only helped the company create applications but also spearheaded the acquisition of Kaggle, a platform that brings together more than 2 million data scientists.

Outlook

Fei-Fei Li has come a long way, and she has done her part. From helping her family to making her own career as an AI pioneer, hers is a great example of a rags-to-riches tale. Her tenure at Google may not have been a long one, but what she achieved and gave to the company during that time is absolutely commendable.

The post How This Pentagon Contract Brought About The Fall Of AI Guru Fei-Fei Li appeared first on Analytics India Magazine.

A Day In The Life Of: Uber Techies Who Work With Big Data


As the year 2018 draws to an end, Analytics India Magazine is starting a new column called ‘A Day In The Life Of’, where we will try to step into the shoes of the awesome techies from various organisations who are working in New Tech areas like big data, data analytics, artificial intelligence and the internet of things, among others.

This week we decided to talk to two women engineers from Uber. Pallavi Rao, Staff Engineer, and Divya B, Senior Software Engineer, spoke to AIM about their lives as two dedicated techies working at a top company, in an area that is constantly changing and evolving.

Pallavi, whose day begins at 6:30 am, rain or shine, says that her philosophy towards life is that she considers each day a brand new challenge. She applies the same rule for her work at Uber, “Each day has a brand new technical problem to solve and, I enjoy solving problems,” she says cheerfully.

Divya, who efficiently juggles work and play, says:

My mornings are dedicated to self-care (I work out every single day), and my evenings are dedicated to my family.

As Pallavi walks us through her day, she explains, “My job is to chart out a technical roadmap for the team given the various product requirements and keeping in mind the design principles and good engineering practices. My expertise is in big data systems and machine learning. I work across various projects within the group helping and guiding engineers where needed.”

Her team’s job is to build technology that helps Uber marketers reach the right set of audience, at the right time, with the right message. “Ensuring our marketing spend gives us the best ROI is another goal our team helps to achieve. This involves building systems that ingest data from multiple sources and process them, churning GBs of data in an hour,” explains Pallavi.

Divya’s key responsibilities, on the other hand, include:

  • Making technology choices
  • Design and architecture of big data systems
  • Building big data and analytics systems
  • Coding/programming
  • Mentoring teams
  • Setting up a process and good coding practices

She is currently working with the AdTech team. Divya explains, “Uber spends a huge amount of money on advertising across hundreds of channels throughout the world. Technology built by the AdTech team helps Uber spend efficiently, using data analytics and machine learning. It involves processing data at scale using big data systems.”

Both Divya and Pallavi are working mothers, and one can see the meticulous planning that has gone into juggling all these roles efficiently.

For example, Pallavi, who eats her meals at the Uber cafeteria, prefers to eat home-cooked food on the weekends. She also plans her free time in such a way, that it covers all aspects of her vibrant personality. “I would say I have an active social life. I am an extrovert, but not really a party animal. I occasionally go for movies, dinner, games or even treks. However, I like my quiet times too, where I can just curl up with a book or just stay indoors and play a board game with my family. I also play basketball with my son or go for walks with my husband.”

Painting a vibrant picture of the Uber team in India, Divya says happily that the best part of her workday includes “lunch talks” with the team. On a serious note, she says:

Solving a problem in a better way than the existing solution, if there was a pre-existing one, totally makes my day.

When asked about their long-term plans, both of them had a clear idea.

Pallavi says, “My five-year plan is to work on company-wide initiatives and drive the technology charter across various sub-orgs. But for now, I want to make the AdTech team successful by bringing in innovative and cutting-edge technology solutions and setting it up for long-term success. I also want to work with the rest of the engineering leaders at Uber to make this [Bengaluru] site a world-class technology centre.”

Divya says, “Over the next five years I want to move up in my career and see myself in a position that creates a positive impact across the company. My short-term plan is to work on challenging projects that make a difference to my organisation, and which can span across other companies as well.”

Also see:

The post A Day In The Life Of: Uber Techies Who Work With Big Data appeared first on Analytics India Magazine.

No Digital Ecosystem Is Self-Sufficient; Collaboration Is The Key, Says Dattatri Salagame Of Bosch



The year 2018 has been quite monumental for Bosch India, especially with regard to its ‘smart solutions’. At a recent event, Bosch India focussed on three areas of its growth plan for the coming year.

  1. Energy-efficient solutions and connected products (such as power tools and security systems)
  2. Cross-functional teams as a key to success in the IoT era
  3. Pursuing a “3S” strategy: sensors, software, and services

We caught up with Dattatri Salagame, Head of Digital Business at Robert Bosch Engineering and Business Solutions Ltd, and talked about their India-specific endeavours: how they are designing and deploying customised digital solutions for Indian customers, offering end-to-end digital stack solutions and consultation to enterprises, and operating on a unique Lead User-Lead Provider strategy where they first use, then perfect, digital solutions.

Analytics India Magazine: What does Beyond Mobility 2.0 mean to you?

Dattatri Salagame: Bosch is boosting its Beyond Mobility business units with digital solutions that capitalize on growing infrastructure and demand. Over the last few years, the beyond-mobility businesses have gained 35 percent and have contributed greatly to the group’s overall turnover. Fueled by the impetus from the Indian government amid structural reforms such as GST, the country is undergoing a rapid transformation that is setting up Bosch’s beyond-mobility business units for accelerated growth as the company goes from conventional to digital by transforming businesses beyond mobility.

AIM: How are artificial intelligence and the internet of things a part of this journey?

DS: Bosch worldwide is transforming itself and helping its customers in their digital journey. Our earlier products too had an element of AI; whether it was ABS or ESP, all of them had AI in the software to predict anomalies and react, to ensure safe driving. In today’s digital world we go much beyond this, and with our knowledge across the spectrum of sensors, software and services, we are one of the biggest IoT-enabling companies.

Phantom is a good example of how an erstwhile energy meter is now smarter with AI. We are not only able to use it to measure energy but also to jump-start Industry 4.0. The AI in Phantom is able to create self-awareness among SMEs about energy, asset utilization, the efficiency of their shop floor and so on. The data always existed; we just built an AI to analyze and report on it. Similarly, the Bosch Eyecare fundus camera, with its built-in AI, can detect anomalies in the retina and predict diabetes better than a doctor. The current sensitivity and specificity of the AI are 96% and 98% respectively.
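Sensitivity and specificity alone don't determine how trustworthy a positive result is; that also depends on how common the condition is among those screened. Using the quoted 96%/98% figures and Bayes' rule (the prevalence values below are illustrative, not Bosch's data):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability that a positive screen is a true case."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# Same 96% sensitivity and 98% specificity, two screening populations:
ppv_high = positive_predictive_value(0.96, 0.98, 0.20)  # condition common
ppv_low = positive_predictive_value(0.96, 0.98, 0.01)   # condition rare
```

With a 20% prevalence most positives are real, but at 1% prevalence only around a third are, which is why screening tools are usually paired with confirmatory diagnosis.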

The Bangalore Center for AI is one of Bosch’s global centres – its team of data scientists, data engineers and visualization engineers works towards innovation. Bosch will strategically build innovative solutions across domains using new technologies with a very strong AI backbone.

AIM: Examples of cutting-edge work done by Bosch in this area

DS: Bosch is working in the fields of connected industry, healthcare, agriculture, smart cities with the framework of sensor, software and services. Here are some of the examples:

  • Climo, a microclimate monitoring system and a CES Honoree in 2018, is a good example of how we can address the issue of air quality with connected products. With its sensor technology, Climo is revolutionary, much more compact, and streams data live
  • Some of the agri sensors we have been working on can help farmers get live data about the condition of the soil; combined with weather data, the analytics can help predict crop and soil conditions better than ever
  • In healthcare, we are working on next-generation non-intrusive devices which can not only improve the lives of patients but also increase the accuracy of disease detection. The AI in the eye-care fundus camera is trained to be more accurate in predicting the occurrence of diabetes while it scans a person’s retina

AIM: Do disruptive technologies take time to evolve? How so?

DS: Bosch India’s innovation strategy is centred on a systematic approach: identifying market and technology trends and placing bets to find the next disruptive innovation. New-age technologies have the potential to disrupt multiple verticals, allowing us to find intersection points, act as amplifiers and experiment with innovative business models. Disruptive technologies do take discipline in innovation and a deep understanding of technology and domain. Resource training and management also play a key role.

AIM: Do you think India will outsource AI/IoT-related projects?

DS: Collaboration is key in the new digital era. In the whole digital ecosystem, there is never going to be one entity which will have all competencies and hence collaboration is the key. And collaboration can be cross borders too.

AIM: Tell us a little about your DNA Accelerator programme?

DS: Bosch India is leveraging its technical and business capabilities to support and encourage the startup ecosystem. This would help India become a hub of startups and innovation on the global map. We want to help startups grow by expanding their capabilities. DNA stands for:

  • Discover: It is the starting phase to find startups in relevant defined scope areas which are of interest to Bosch. We schedule pitch days, hackathons and tech events to find relevant startups
  • Nurture: This is an intensive accelerated-growth program to help build capabilities and scale up startups from “Lab to Market”
  • Align: Defining and structuring the engagement of startups with business units. This includes joint go-to-market approach, overseeing customer partnerships and attracting investments

The post No Digital Ecosystem Is Self-Sufficient; Collaboration Is The Key, Says Dattatri Salagame Of Bosch appeared first on Analytics India Magazine.


Has The AI Train Left The Station? – An Interview With Leading AI Influencer Spiros Margaris 

Spiros Margaris is one of the world's leading AI and fintech influencers. He speaks at international FinTech and InsurTech conferences and publishes articles on his innovation proposals and thought leadership. He published an AI white paper, “Machine learning in financial services: Changing the rules of the game,” for the enterprise software vendor SAP. Spiros has more than 25 years of national and international experience in investment management/research and in innovation and technology management.

So, we decided to pick his brain on AI in fintech, AI innovation and AI in general. Here's the complete interview.

Analytics India Magazine (AIM): Why is AI so important for Fintech?

Spiros Margaris (SM): Artificial intelligence (AI) and machine learning increasingly play a very important role in all industries that want to compete and survive successfully in a digital world. For the Fintech industry in particular, they allow the industry to provide customers with more personalized and better services, such as, among many other things, fraud detection or KYC (know your customer).

In a world where we all want increasingly personalized services and solutions, AI and machine learning are the only game in town.

AIM: Do you think investment in AI is still low? Are we getting there?

SM: I think as we see more success stories in the AI space, we'll see more investments flow into the sector. We've seen huge investment and valuation numbers already in 2018, and I expect that trend to continue for the foreseeable future. I'm not saying there isn't some hype around AI, but that doesn't mean we'll see an end to it any time soon. The advantages and promises that AI, machine learning and deep learning bring to industries and countries are still so attractive at this early stage that no one wants to miss out or be left behind.

To be cautious and selective when it comes to AI investment might be prudent advice, but then again, as they say, “No risk, no glory.” It's a decision every investor has to make for themselves. All I can say is that we're still in the early stages of AI advancement, and competition will drive the technology to the next level faster than most of us can imagine.

AIM: How do small & mid-sized companies compete in an AI-driven world, where tech giants are taking most of the AI pie?

SM: To be fair, it's hard for small and mid-sized companies to compete head-to-head with tech giants, simply because of the sheer talent and money that the giants can deploy in AI research, development and deployment.

That is why smaller companies can't compete directly with the Amazons, Googles and Apples of this world. However, it doesn't mean they can't compete against the tech giants at all. I advise smaller companies and startups not to play the giants' game but to play to their own strengths and USPs (unique selling propositions), and to partner with AI companies or license AI solutions. In the end, it's about providing customers with value; AI plays an increasingly important role but is definitely not the only success factor for any business.

So, to sum up, it’s not only about algorithms that will decide the winners but also great applications and solutions that people want to use.

Consumers don’t really care how much (AI or any) tech is used behind the scene as long as it gets the job done that they desired—consciously or subconsciously—in the first place.

AIM: In a future world dominated by AI innovations, how do countries keep up with the pace?

SM: It sure looks like the US and China are the big elephants in the AI race who commit the most investment and resources to dominate the AI space. Although Europe, India and other countries have woken up to the fact that significant AI investments have to be done to not fall behind and lose their competitive strengths, it’ll be hard to compete against these two elephants in the AI room.

As a positive, I believe we’re still in the early AI game, and key innovations can come from anywhere regardless of the money spent, but the likelihood to develop the AI competitive edge of course increases with resources committed to the space.

AIM: What can we professionals do to have a greater collaboration in AI between countries?

SM: I strongly believe that collaboration between countries and professionals will benefit all of us. Governments should encourage their AI scientists, entrepreneurs and companies to work more closely together with other countries and experts to develop and find new AI applications. Of course, many countries try not to share their AI competitive edge and know-how freely, but, as in the real world, collaboration or teamwork leads to more likelihood of successful solutions.

AIM: What do you think about AI in India? In other words, how have your perceptions about AI in India shaped over time?

SM: India still has the image of providing back-office services to the world, but I think that’ll change as its government has clearly understood the consequences of AI to the back-office business model, its society and workforce. I strongly believe that India is a country that can adjust to the new AI world and will increasingly prepare its society for the challenges ahead and skill its people and workers accordingly. It would be a mistake to count India out in the AI race; to the contrary, I believe the country is on the right track to play a leading role in how AI is implemented in the future.

We’re still early in the AI race, regardless of what is implied in the media.

AIM: What can India do more in this field?

SM: India, like any country, should put as many resources as possible into creating an AI-friendly environment. That means the government must put a lot of resources into educating its people, starting from small children all the way up to the universities. Furthermore, regulation has to encourage companies and startups to use technologies such as AI and machine learning to create the future companies that could become global leaders.

That is easier said than done because we must consider, among other things, privacy and ethical issues when we push AI forward. Not everything that is possible should be done without thinking very carefully about the consequences to society and the world. And finally, government support has to be maintained regardless of how hot or cold the AI environment is. In other words, a long-term plan needs to be put on the political agenda to avoid falling behind other countries so much that it might feel like the last train has left the station.

The post Has The AI Train Left The Station? – An Interview With Leading AI Influencer Spiros Margaris appeared first on Analytics India Magazine.

IT Minister Ravi Shankar Prasad Turned The Spotlight On AI, The Future Of Tech

Ever since the NDA-led BJP government came into power in 2014, it has shown a marked preference for taking proactive steps to help the private sector as well as to integrate emerging technologies into the public sector. In the last couple of years, the Indian Government has launched several initiatives, such as the Atal Innovation Mission's Atal Tinkering Labs, Atal Incubation Centres and scale-up support to established incubators. It has also established the Artificial Intelligence Task Force, set up under the Ministry of Commerce & Industry, which underscores how the country is preparing for the upcoming Industrial Revolution 4.0 and the resulting economic transformation.

Ravi Shankar Prasad, Union Minister for Electronics and Information Technology, has played a key role in this journey. From promoting new technologies at events to reaching out to other nations to join hands with India, Prasad has been pivotal in promoting the development and integration of emerging technologies and in fostering a vibrant ecosystem of new-tech startups that are actively contributing to India's technological and economic development.

We List Down 7 Times Union Minister Ravi Shankar Prasad Spoke About AI In Public:

Indian IT companies don’t steal jobs — they create them

Addressing a gathering of IT leaders in April last year, Prasad said that Indian IT companies have taken the world by storm with their software skills; they don't steal jobs anywhere in the world but create them.

“Indian IT companies do not steal jobs but create jobs, be it in the US or any other country. We are proud of their contribution in America and the world over.”

After a brainstorming session with industry leaders

In June last year, Prasad met industry leaders including Rishad Premji of Wipro, Kavin Bharti Mittal of Hike, Rajan Anandan of Google and Vanitha Narayanan of IBM to discuss a roadmap for the $1-trillion digital economy. After a brainstorming session lasting more than two hours, the minister said that they had discussed new areas to conquer, such as e-commerce, artificial intelligence and the internet of things.

“We will look at developing a framework for developing startup clusters policy,” Prasad said.

At a gathering in IIT Delhi

Recently, Prasad addressed a gathering at IIT Delhi on privacy and data. Speaking about data, he said it is high time data was made available for research and growth so that India can lead the world in the digital revolution.

“India missed the impact of the industrial revolution for a variety of reasons. We don’t wish to lose the digital revolution. On the contrary, we in India want to become the leaders in the digital revolution,” Prasad told the gathering.

In addition, he spoke about the advent of artificial intelligence in India. He emphasised that the nation is witnessing a tremendous start-up revolution, and that artificial intelligence, data mining and the internet of things will empower not only the startups but the nation as well.

“I clearly foresee India is going to become a big centre of data research,” he said.

Leverage AI for Healthcare

In April this year, at the ASSOCHAM ICT Start-Ups Award 2018, Prasad asked Indian startups to make the most of new-age technologies such as artificial intelligence and machine-to-machine communication and to create innovative solutions in sectors like healthcare and education, especially for rural areas.

“How can you leverage Artificial Intelligence, Machine to Machine communication and Internet of Things for healthcare, agriculture and education in rural areas…that is the challenge for you,” Prasad said at the event.

Approached Russia to leverage the potential of India’s digital economy

In October 2018, India invited Russia to leverage the potential of the nation's booming digital economy. According to Prasad, the collaboration will not only spur momentum in areas like artificial intelligence (AI) and e-health but will also help both nations' economies.

“The Indo-Russian relationship offers a great opportunity to work together. Russia has outstanding people and innovators. India is also a land of innovators and human resources, including young IT graduates. If we have this kind of collaboration and cooperation, the Indo-Russian relationship will acquire a technology momentum of its own,” the minister said.

During a trip to Silicon Valley

This year in August, Prasad was on a trip to the US. At a seminar organised by the US-India Strategic Partnership Forum and the Consulate General of India, he met some of the most renowned tech leaders of Silicon Valley, including top brass from Google, Wipro, Oracle and GE, and asked them to forge a strong partnership with India.

“India offers an improved investment climate, growing market, a large pool of talent and improved profitable destination for investment,” said Prasad.

The Union Minister also visited the Google campus and met India-born Sundar Pichai, CEO of Google; they discussed Google's plans for empowering India in the areas of connectivity, Indic languages, AI solutions and capacity building for startups and SMEs.

Committees for AI research

With the allocation for the Digital India programme almost doubled in the Union Budget 2018, Prasad emphasised the use of artificial intelligence (AI) and electronic manufacturing. The IT minister held a high-level meeting with around 50 participants, including directors of IITs, NASSCOM and private entities, to discuss the roadmap for promoting AI in India.

Prasad set up four committees that will carry out research on AI.

“These committees will research and work on the development of citizen-centric use cases; data platform; skilling, re-skilling, research and development; and legal regulatory, ethical and cyber-security,” said Prasad.

The post IT Minister Ravi Shankar Prasad Turned The Spotlight On AI, The Future Of Tech appeared first on Analytics India Magazine.

A Day In The Life Of: Meet PayPal’s Techie Who Knows How To Strike The Ideal Work Life Balance

For this week's ‘A Day in the Life of’ series, we introduce you to Abhirami Mahadevan, who works as a release manager on PayPal's global release engineering team in Bengaluru.

Abhirami, a working mother, is a quintessential example of a modern IT professional who juggles work and family seamlessly.

For Mahadevan, a typical weekday begins as early as six in the morning and can stretch beyond six pm depending on the host of meetings that she needs to attend at her workplace.

A career-oriented woman, Mahadevan doesn't mind the meetings as long as they are productive, and says she counts on the weekend to make up for the lost hours of sleep. “Our working hours are from nine am to six pm, but based on the support model that we operate in with the other regions, it occasionally spills over due to meetings and collaboration. Like everyone, I just wait for the weekend to sneak in those extra hours of sleep,” she says gleefully.

But for working mothers, busy schedules like Mahadevan's can be a challenge, especially when it comes to managing the work-family balance. This is where organisational and team support comes in handy in maintaining that work-life equilibrium. For Mahadevan, PayPal and her team have been a strong pillar of support, and because of this, she says, she is able to manage her work without letting it affect her family time. “I am able to integrate my work and life with the flexibility that PayPal offers and the support I receive from my team,” she adds.

Currently, she works in PayPal's Technology Platform Experience (TPX) organisation. “I am also expanding my role as a scrum master for our release engineering tools development team,” she added.

Giving us a brief insight into her role as a release manager at PayPal, Mahadevan says her job requires her to plan, coordinate and manage releases across the enterprise for multiple applications. “My role involves coordination across multiple teams, communicating with stakeholders, keeping track of a myriad of dependencies, and monitoring status via tools and mails, ensuring the release process is followed so that new and enhanced IT services required by the business are quality-tested, verified and approved before they go live, thereby protecting the integrity of existing services,” she explains.

Amidst her busy day at work, one cannot help but ask about her routine for unwinding, to which she says, “I often attend the yoga or aerobics classes that PayPal offers, and apart from this I do some artwork when I find the time.”

At her office, she also works on multiple projects beyond her regular role. For instance, Mahadevan is part of PayPal's new initiative called Unity, an affinity group of women and men working together to create more opportunities for women at PayPal. She is also a member of the company's working parents' group. On the question of her company nurturing her talent, she says, “I have been provided with the right opportunities to tap into my potential for management and collaboration, which I am great at.”

In the long run, Mahadevan hopes to fine-tune her managerial skills while at PayPal. When quizzed about her five-year plan, she says, “I see myself taking on more managerial responsibilities in the next few years, where I'll be able to use my skills to support and influence others. I've been lucky enough to work with some amazing managers, and so developing into a great manager myself is something I'm really excited about and look forward to,” she concludes.

The post A Day In The Life Of: Meet PayPal’s Techie Who Knows How To Strike The Ideal Work Life Balance appeared first on Analytics India Magazine.

Top Indian Quora Writers To Follow For Data Science

Data science is the new currency, an asset that has taken over the technology sector across the globe. Data and data science practices have already significantly impacted key aspects of business. Quora is an excellent platform for helping people get their questions answered and for sharing inputs in areas such as technology, academics, project management and new trends in business.

In this article, we list down Indian Quora writers whom aspiring data scientists can follow for the latest insights and data trends.

Lalit Patel

Holding a PhD in physics as well as an MBA, Lalit Patel is experienced in machine learning and nanotechnology. Presently working as a Computer Audit Analyst with the State of Florida Department of Revenue, Patel has worked in many reputed companies, taught at IIT Delhi for about five years, has two million answer views and was a Quora Top Writer for 2018.

Shweta Doshi

Doshi is the co-founder of GreyAtom, a data science immersive-learning school. She believes technology is often a tough subject for colleges and universities, since the curriculum is frequently outdated and not aligned with industry expectations. GreyAtom was founded to bridge this gap and make tech education more relevant to industry standards. She has 1M answer views and is very active in answering questions about emerging trends.

Sudalai Rajkumar S

An alumnus of IIM Bengaluru, Sudalai Rajkumar is presently a lead data scientist at H2O.ai. He has vast experience in solving real-world business problems across diverse domains, and is a Kaggle Grandmaster in both the Competitions and Kernels sections. His top achievements include top rankings in recent hackathons and being a top solver on the CrowdANALYTIX platform. He has 390.2k answer views on Quora.

Adarsh Iyer

A graduate of the University of Mumbai in electronics and communications engineering, Adarsh is a data-driven professional. He is a specialist with vast experience across a range of IT domains, having worked in supply chain and logistics, e-commerce and food-processing machinery. He loves answering questions on Quora and has 327.6k answer views.

Abhinav Krishnan

A graduate of the College of Engineering, Guindy, Krishnan has 5+ years of experience in data science, Python, Excel and SAS. He has solved many real-world business problems across heterogeneous domains and has 305k answer views on Quora.

Ratnakar Pandey

An MBA from the Indian School of Business (ISB), Ratnakar Pandey is the India head of analytics and data science at Kabbage Inc, and writes about data science using Python on RP's Blog on Data Science. He has 15+ years of senior management experience in analytics and data science across the banking, financial, fintech, retail, technology and healthcare verticals, has led global teams across 10+ countries, and has 869.8k answer views on Quora.

Vidita Mehta

Vidita Mehta completed her fintech course at Imarticus and graduated from Mumbai. She discusses the latest technology on Quora, which she finds a great platform for discussion and sharing opinions, and holds 594.7k answer views.

Akash Dugam

A data science consultant, Dugam helps aspiring data scientists with the wealth of knowledge he has gathered in the analytics industry. His areas of interest include artificial intelligence, IoT and machine learning. He holds 1.6 million answer views.

The post Top Indian Quora Writers To Follow For Data Science appeared first on Analytics India Magazine.

Evelyn Berezin Was The Unsung Hero Behind World’s First Word Processor

Remember those late-night toils completing the final copy of your thesis or the final draft of your first novel, when the typewriter went numb and needed re-inking or the ribbon needed replacing?

No? Hardly anyone remembers this, because modern humans make lists, take notes and complete weekend assignments using MS Word or Google Docs.

These word processors eliminate the drudgery of discarded papers and last-minute trips to buy ribbon. Spell check, undo and redo have become so common that they have made the job of a writer quite easy, at least until one runs out of ideas.

Modern-day word processors, now part of every personal computer, are by-products of the innovators of the golden age of computing. The motivation may have been to make a secretary's job easier or to give a big firm a profitable product, but word processors ended up being remarkably efficient.

Evelyn Berezin designed some of the earliest computer systems for banks, airlines, stock exchanges and horse tracks, and helped usher in a technological revolution in the form of the first mass-produced word processor.

Berezin founded Redactron Corporation in 1969 and soon settled on a word processor as its flagship product. She designed many other innovative computing systems and helmed the company that changed the life of the secretary forever.

Life As A Computing Pioneer

Her fascination with science began with reading science fiction periodicals left behind by her brother. Born in 1925, Berezin earned a BA in physics at NYU before spending the 1950s and 1960s designing early computing systems, a period that saw a great surge of innovation in computing with the invention of the transistor.

Another of Berezin's most significant contributions was to aviation. Her airline reservations system for United Airlines served 60 cities throughout the United States with one-second response times and no central-system failures in 11 years of operation, which is outstanding even by modern-day standards.

By the late 1960s, IBM had developed the MT/ST (Magnetic Tape/Selectric Typewriter). This was a model of the IBM Selectric typewriter from earlier in the decade, but built into its own desk and integrated with magnetic-tape recording and playback facilities, with controls and a bank of electrical relays. The device, however, was bulky, hard to use and expensive.

The device allowed rewriting text that had been recorded on another tape, and users could collaborate by sending a tape to another person to edit or copy. It was a revolution for the word-processing industry. In 1969, the tapes were replaced by magnetic cards, inserted into the side of an extra device accompanying the MT/ST that could read and record the work.

The first version of Berezin's word-processing machine, by contrast, had no screen and was the size of a modern washer or dryer, though later iterations brought displays and other improvements to the design.

She sold 10,000 of these machines, designed by her very own Redactron, over a period of seven years. The chip design was originally meant to be handled by Intel, which was newly formed and needed time to design the chips. The long wait squeezed every penny out of Redactron, forcing the company to design its own machines.

Priced at $8,000, Redactron's machine competed with the likes of IBM and was hugely profitable. Redactron was later acquired by another computing giant of the period, Burroughs Corporation.

Berezin then began a stint as a venture capitalist. A physicist, computer engineer and entrepreneur, she is most importantly a visionary who set high benchmarks for herself as well as for her contemporaries.

There are more than sixty word-processing systems now, and Berezin occupies a very special place in transforming digital communication forever. She was inducted into the Long Island Technology Hall of Fame in 2006 and the Women in Technology Hall of Fame in Los Angeles in 2011. She belongs to a handful of elites and pioneers whose contributions haven't received the recognition they deserve.

The post Evelyn Berezin Was The Unsung Hero Behind World’s First Word Processor appeared first on Analytics India Magazine.
