Recent posts

Why vote for Google Cloud Platform?

As more and more businesses decide that they don't want to manage infrastructure and IT operations themselves, the ‘Cloud First’ approach is becoming the norm rather than a choice. With the leading players in this domain being Amazon Web Services (AWS) and Azure, Google is often criticized for lacking the institutional will to create a robust cloud platform that enterprises will be enticed to adopt. However, this outlook is now changing. Google is not just investing heavily in its cloud platform, but is also bringing the enterprise mindset and ecosystem into the Google Cloud Platform (GCP). A few services that make GCP unique are:

BigQuery

A fully managed, petabyte-scale, low-cost enterprise data warehouse for analytics. Simply speaking, BigQuery is the public interface for one of Google’s most successful internal tools, called “Dremel.” It can scan millions of rows without an index in a matter of seconds and is an excellent alternative to Hive on Hadoop or Spark SQL. However, limited integration with some of the industry-standard business intelligence tools and the lack of support for user-defined precision alternatives are some of BigQuery’s major shortcomings. Of course, like any data warehouse, BigQuery cannot predict the amount of data that will be stored and processed in the future. Our solution experts at Nagarro, however, generally solve this problem with calculation tools that optimize usage and predict costs, relying on long-term storage, query and usage optimization, data expiration policies, partitioning, and sharding.

Cloud Dataflow

A managed service for developing and executing a wide range of data processing patterns using a unified programming model. Big data applications often face challenges such as a lack of resource capacity, a wide range of data processing patterns, and the need to manage multiple services for various data processing pipelines. This leads to both performance and operational issues. Dataflow provides a simplified, developer-friendly way to approach this problem. While the Cloud Dataflow runner service abstracts the application from low-level details, the Dataflow SDK allows developers to create data processing jobs and focus on logical composition rather than physical orchestration. Pipeline design matters: flaws in such designs can lead to a huge increase in reads and writes, which will eventually degrade performance. At Nagarro, we recently recommended Cloud Dataflow together with BigQuery to a customer whose requirements were centered around the IoT space – with real-time event processing and stream processing of real-time data.

Cloud Dataproc

A fully managed Hadoop and Spark offering. When an application is tightly coupled with big data tools like Hadoop and Spark, Dataproc provides an easy alternative by making the entire process easy to start, run, and manage. Dataproc also abstracts the application from hardware and software installations. Ease of use, together with quick creation and scaling capabilities, makes Cloud Dataproc ideal for many business scenarios. Although built-in support for on-demand clusters is not available, clusters can be managed using the software development kit (SDK) and REST API provided by Google Cloud, as the sketch below shows. Also, unlike in other MapReduce solutions, a cluster is acquired as part of job creation in Google.
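To illustrate cluster management through the SDK, here is a minimal sketch using the google-cloud-dataproc Python client; the project, region, cluster name, and machine types are placeholders, not recommendations.

```python
# Minimal Dataproc sketch (google-cloud-dataproc client library assumed;
# project, region, cluster name, and machine types are placeholders).
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": "my-project",
    "cluster_name": "demo-cluster",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
    },
}

# Cluster creation is a long-running operation; block until it completes.
client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
).result()

# Delete the cluster once the job finishes, keeping per-minute billing low.
client.delete_cluster(
    request={
        "project_id": "my-project",
        "region": region,
        "cluster_name": "demo-cluster",
    }
).result()
```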
Factors like high granularity of billing (per-minute), separation of storage and compute, and affordability of services in comparison with competitors make Dataproc and its service offerings unique.

Machine Learning & Artificial Intelligence

One of the biggest challenges the industry faces in the machine learning domain is training larger models, which often requires expensive special-purpose hardware. Combined with the lack of technical capability to build and train custom models, this becomes the biggest hindrance to the adoption of machine learning (ML) and artificial intelligence (AI) by enterprises. Google solves this problem by providing pre-trained models along with a long list of supported APIs such as Google Cloud Vision, Google Translate, Cloud Speech, and Cloud Natural Language (see the short sketch at the end of this post). Moreover, the Google Translate API and Google Prediction API are simplified services that make learning and adoption easy. Many customers we have worked with at Nagarro have also been able to make the best use of Google's AI/ML services based on TensorFlow for predictive analytics and deep learning.

Infrastructure as a Service (IaaS)

While the world is talking about “DevOps”, Google is already realizing the “NoOps” dream. Google says that the instance boot time of Google Compute Engine (GCE) is considerably better than that of most of its competitors. It also claims to be the first platform to deliver its cloud infrastructure on Skylake (Intel’s latest processor). Another interesting feature of GCE load balancers is that they don't need any pre-warming, unlike those of AWS. GCE also brings in the concept of “pay per use” based on sub-hour billing: compute engine instances are charged in one-minute increments (with a ten-minute minimum), so users don't pay for compute minutes that their application does not consume.

Our observation is that Google has emerged as a strong and serious cloud player and has reinforced the belief that AI is the future of the cloud. At Nagarro, we help our clients choose a computing platform based on an exhaustive checklist of functional and non-functional requirements. We also provide solutions keeping in mind the ever-changing market trends. For more information on our offerings in this area, you can reach us at info@nagarro.com.
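A small illustration of the pre-trained APIs discussed above: the sketch below calls Cloud Vision label detection, assuming the google-cloud-vision Python client; the image file name is a placeholder.

```python
# Hedged sketch (google-cloud-vision client library assumed;
# "storefront.jpg" is a placeholder image file).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("storefront.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pre-trained model for labels -- no custom training required.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```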

Enable omni-channel physician engagement

We live in an incredibly progressive era, where unprecedented medical advancements are leading to the availability of new treatment options every single day. Recent statistics indicate that 22 new drugs were approved by the US FDA in 2016* and that more than 500 new brands were launched in the past year**.

Express Delivery: MVP for last mile connectivity in 7 days

1 idea, 2 databases, 3 technologies, 4 personas, 5 integrations, 6 screens, 8 workflows, and 9+ APIs – all delivered in 7 days. This was no ordinary project – it was a long-standing transportation problem solved through technology in flat 7 days.

The vision

It all started with a vision to improve last-mile connectivity in the Delhi NCR (National Capital Region). Jangid Motors, a fast-growing electric vehicle company based out of Gurugram, India, contacted Nagarro to partner with them in solving the complex challenge riders of public transport face in the last leg of their journey, from the metro stations (the intra-city rail transport system) to their destination. Through this initiative, Jangid Motors also wanted to streamline the completely unorganized e-vehicle sector in this region. As a first step, Nagarro organized a 36-hour hackathon to generate ideas for last-mile connectivity solutions. With the ideas from the hackathon as a base, Nagarro embarked on a path-breaking 7-day assignment with Jangid Motors.

The partnership

After several rounds of closed-room discussions between the Nagarro and Jangid Motors teams on functional and non-functional flows and use cases, a proposal was presented to the Delhi Metro Rail Corporation (DMRC). Based on this meeting, Jangid Motors was awarded exclusive rights to operate electric vehicles at several metro stations in the NCR region. There were only 50 days to go-live, of which several were required for field testing with a real, production-ready product. Meanwhile, DMRC also wanted to see a running product demo before that.

The real test: Minimum Viable Product (MVP) in 7 days

Nagarro used its Enterprise Agile approach to perfection here. The chosen set of technologies was MySQL and MongoDB for the databases, native Android for the mobile app, NodeJS for server-side programming, and HTML5 and JavaScript for client-side programming. A team came together to carry out a fast-paced, no-break, 7-day marathon to MVP delivery. A quick synopsis of this 7-day express delivery goes somewhat like this:

Day 1 – Conceptualization: The idea was discussed with the development team to get them up to speed with the concept. The day was spent conceptualizing the overall solution together and developing a high-level design.

Day 2 – Design: Digging into the complexities of the requirements and addressing them in the application design was the focus of this day. The end-to-end technology stack, application architecture, and database design were the tangible outcomes.

Day 3 – Coding: Coding started for the Android app and the backend APIs. Database scripts were completed by the end of the day.

Day 4 – Ops: Basic deployment ops over the cloud, along with integration preparations for third-party services, were set up and unit tested.

Day 5 – Integration: Continuing work through the weekend, the Android app's integration with the APIs started for the first time. It was a tough, long day for the team, ironing out integration issues.

Day 6 – Testing: Integration was a rough beast; it took another half a day before it was finally tamed. The real end-to-end testing of business workflows was done for the first time. Testing didn't go as expected, but the solution worked end to end.

Day 7 – Staging: Actual field testing was done – moving around the city with the Android app on a device. Server coding on the cloud was also carried out at this stage. There were some minor glitches here and there, but by the end of the day there was working code without any show-stoppers, checked in and ready to be delivered to the customer.
The 7-day delivery was complete!

+1 Day – Beta: Jangid Motors field-tested the application with real EVs, handing Nagarro some GPS-related bugs at the end of the day.

+2 Day – Production: After the field-testing bugs were fixed, the code was pushed to the production environment on MS Azure.

+3 Day – Demo: The express delivery saw the light of day – JM's demo to DMRC. The demo ran smoothly on the production environment, leaving the DMRC team impressed.

The result

The result was a mobile app that provides hassle-free and affordable last-mile connectivity to commuters through electric vehicles (e-rickshaws). A rider can use their metro card to check in to the nearest e-rickshaw and book a ride. The fare and distance are calculated on check-out (through the app), and the passenger is charged based on the rate card. The mobile app opens up the possibility of cashless transactions and wallet integration. It also allows e-vehicles to be managed by an automated fleet management system.

The future

The next phase of development involves implementing more interesting features such as payments via metro card, an analytics dashboard, real-time EV tracking, and more. And of course, there is now the matter of solving the issue of first-mile connectivity.

A quick guide to disaster recovery with AWS

In today's fast-paced, volatile world, anything that can go wrong, will go wrong. A service-interrupting event – a network outage, a power outage, a bug in the latest application push, or sometimes even a natural disaster – can catch us off guard at any time and disrupt our business continuity. While there is no way to prevent all disasters, we can be prepared for them by formulating a robust disaster recovery plan.

The Amazon Web Services (AWS) solution

According to Amazon Web Services, ‘Disaster recovery is a continual process of analysis and improvement, as business and systems evolve. For each business service, customers need to establish an acceptable recovery point and time, and then build an appropriate DR solution.’ AWS helps customers develop a robust, cost-effective, targeted, and well-tested disaster recovery plan. It all begins with a business impact analysis to work out two key metrics:

Recovery Time Objective (RTO): The maximum acceptable length of time that your application can be offline.

Recovery Point Objective (RPO): The maximum acceptable length of time during which data might be lost from your application due to a major incident. This metric will vary based on the ways the data is used. For example, frequently modified user data could have an RPO of just a few minutes, whereas less critical, infrequently modified data could have an RPO of several hours. This metric describes the length of time only and does not address the amount or quality of the data lost.

The shorter the RTO and RPO, the more your application will cost to run. Given these metrics, AWS offers four basic techniques for backup and disaster recovery.

1. Backup and Recovery

One approach is the cold method, a traditional model that requires periodically backing up the systems on tape and sending the tapes off-site. With AWS, there are several mechanisms available to achieve this, and the choice is based purely on the pre-decided RPO. Using reliable services like S3, Direct Connect, and Import/Export, backup solutions can be defined over the reliable network connectivity AWS provides. The overheads of this approach include defining retention times, ensuring the security of the archives, and regularly testing the backups. The RPO in this case will be large, and there will be downtime before restoration.

2. Pilot Light

The second potential approach is Pilot Light, where the data is mirrored, the environment is scripted as a template, and a minimal version of the system is always running in a different region. The idea comes from the gas heater analogy, where a small flame is always on and can quickly ignite the entire furnace when required. The core element of the system, usually the database, is always activated for data replication; for the other layers, server images are created and updated periodically. In case of a disaster, the environment can be built out and scaled around the pilot light using the backed-up Amazon Machine Images (AMIs). The Pilot Light approach reduces the RTO and RPO and provides the ease of just turning on the resources – recovery takes just a few minutes. AWS CloudFormation can be used to automate the provisioning of services. Apart from relatively higher costs, the overheads include testing, patching, and configuring the services to ensure they match the production environment. A rough recovery sketch follows below.
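As a rough sketch of the Pilot Light "turning on the resources" step, the snippet below uses boto3 to launch application servers from a pre-baked AMI and repoint a DNS record. The AMI ID, instance type, hosted zone ID, domain, and IP address are all placeholders.

```python
# Hedged Pilot Light recovery sketch using boto3; every identifier below
# (AMI ID, hosted zone ID, domain, IP address) is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Build out the stateless layers from the periodically updated AMIs.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
    MinCount=2,
    MaxCount=2,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)

# 2. Route traffic to the recovered environment by updating DNS.
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```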
3. Warm Standby

The next level of the Pilot Light approach is Warm Standby. It ensures that the recovery time is reduced to almost zero by always running a scaled-down version of a fully functional environment. During the recovery phase, if the production system fails, the standby infrastructure is scaled up to be in line with the production environment, and DNS records are updated to route all traffic to the new AWS environment. Essentially, a smaller version of the full production environment is always running, so this approach reduces RTO and RPO but incurs higher costs, as services run 24/7.

4. Multi-Site

As the optimum technique in backup and disaster recovery, Multi-Site duplicates the environment: there is always another environment serving live traffic, running in a different region in an active-active configuration. DNS weighting is used to distribute incoming requests to both sites (a sketch of such weighted records follows at the end of this post). Automated failover is configured to re-route traffic away from the affected site in the event of an outage. Once the primary site has been restored to a working state, we need to follow the failback process. Depending on the disaster recovery strategy, this means reversing the flow of data replication so that any updates received while the primary site was down can be replicated without loss of data.

The four techniques discussed above are not mutually exclusive and can be used in combination, depending on the RPO and RTO metrics. Irrespective of the disaster recovery requirements, a well-designed and tested disaster recovery plan must ensure that the impact on the business's bottom line is minimal when disaster hits. It has to be an amalgamation of a robust, flexible, and cost-efficient selection of strategies that can be used to build or enhance the solution that is right for the business.
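To make the Multi-Site DNS weighting concrete, here is a hedged sketch of an active-active weighted record pair in Route 53 via boto3; the zone ID, domain, and addresses are again placeholders.

```python
# Hedged active-active sketch: two weighted A records let Route 53 split
# traffic across both sites (all identifiers below are placeholders).
import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, ip, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,  # distinguishes the weighted records
            "Weight": weight,         # relative share of incoming traffic
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [
        weighted_record("site-us-east-1", "203.0.113.10", 50),
        weighted_record("site-eu-west-1", "203.0.113.20", 50),
    ]},
)
```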

Intelligent Apps: The future of mobile solutions?

When I hear the question, “What is the future of smartphones?”, I humbly say that smartphones are (still) the future. With the Internet of Things and other advanced technologies gaining prominence, many assume that smartphone use will gradually decrease. I believe that the smartphone is here to stay, simply because the mobile orchestration of micro-moments (to enable user actions and responses) makes this device irreplaceable in the near future by any “thing”. For businesses, a smartphone is the pivot of several intermingled virtual experiences of a user in real time. Organizations that can capitalize on this pivot and solve people's needs are going to define the future. Intelligent apps aid in exactly this, and help businesses connect with users at all times, leading to enhanced value and greater business results.

Intelligent mobile apps

Gartner named Intelligent Apps one of the Top 10 Strategic Technology Trends of 2017. Intelligent apps are being used in domains such as productivity, beauty, education, sales, manufacturing, and entertainment, and they mark the next level of development in mobility solutions. Take, for example, a daily learning app meant for high school students. It improves learning by sending the user contextual messages. The app gets visual cues by recognizing emotions via the camera. So, if the user looks confused when studying the basics of calculus, the app may suggest: “You may like to revise Calculus 101 before taking the next test.” Another example could be that of a left-handed mobile user. The application ascertains that the user is left-handed and moves the hamburger menu from the top-right to the top-left of the screen for easy access. An intelligent app learns the behavioral, contextual, and emotional patterns of a user in real time and provides a personalized experience in every successive session. Every time an input is made, the mobile app tailors a personalized response.

Anatomy of an Intelligent mobile app

Elimination of outdated keyboard inputs: Intelligent use of gestures, speech, and biometrics at the right moments can enable a user to operate apps without keyboards.

Use of cognitive APIs: Mimicking human functions like speech, cognitive APIs can convert text into speech, recognize familiar faces and emotions, and give advice to users.

Continuous personalization of UX: Personalization of the user experience should be not only a design-time but also a run-time activity. A user's pattern of operating the app changes with time, and the mobile app should change its behavior continuously to stay in sync.

Prediction of behavior based on user activity: Frequent, in-depth analysis of the user's activity is critical, as it lays the foundation of the app's behavior in the current and the next stage.

Frameworks required for Intelligent Mobile Apps

The app design should have the provision and scope to augment the user journey rather than changing it altogether. To start with, these self-organizing apps can have small additions, deletions, and discoveries based on built-in intelligence and self-learning algorithms. For such apps, the age of the app and the lifetime of the app user will drive their maturity and success. To begin with, app designers and product owners can think of the following frameworks to support these apps:

A framework to Know Your Users' Activity (KYUA).

A framework to acknowledge the patterns of user activity and provide support for prescriptive design changes.

A framework to take the app to the next level of engagement and user satisfaction by rolling out quick releases.

A toy sketch of the run-time personalization idea follows below.
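To make run-time personalization concrete (for instance, the left-handed-user example above), here is a small, purely hypothetical sketch; a real app would feed it live touch telemetry instead of a hard-coded list.

```python
# Purely hypothetical sketch of run-time UX personalization: infer probable
# handedness from recent tap positions, then mirror the hamburger menu.
from statistics import mean

def infer_handedness(touch_xs, screen_width):
    """Guess handedness from the horizontal positions of recent taps."""
    if not touch_xs:
        return "unknown"
    return "left" if mean(touch_xs) < screen_width / 2 else "right"

def menu_position(handedness):
    # Place the menu under the thumb the user actually favors.
    return {"left": "top-left", "right": "top-right"}.get(handedness, "top-right")

# Example: taps clustered on the left half of a 1080-pixel-wide screen.
recent_taps = [120, 260, 310, 180, 400]
print(menu_position(infer_handedness(recent_taps, 1080)))  # -> top-left
```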
To help businesses achieve more, Nagarro works on enabling such mobility frameworks, and delivers continuously improved, highly intelligent and usable apps for the future. Watch this space for a white paper on these frameworks and their utilities in the intelligent app space. Read more about our mobility solutions or contact us at info@nagarro.com for further information. 

Hotspot Innovation: Why IT is part of the solution

The call for innovation in politics, business, and society sounds almost like a threat nowadays. No doubt, massive structural changes are underway – keyword: “business disruption”. The best companies innovate and transform proactively. However, how does one tackle such a great, interdisciplinary, vague venture? An approach can be found in IT, the field that has, in many ways, initiated the current transformation movement. Let's not forget: Industry 4.0, analytics, virtual reality – the new visions of the future would not be possible without the corresponding technology concepts. Software, cloud infrastructure, platforms, and services are the glue that holds the business models of tomorrow together. Hence, the underlying IT concepts are also part of the solution.

Combining innovative design with IT

There are endless possibilities for innovation. To create new ideas, one needs a collaboration platform; to implement them, a concept and resources. In large corporations, it is quite common to use dedicated innovation teams and future labs. However, most organizations lack the resources, frameworks, and partners for such a task. Nagarro offers exactly that and assists businesses in designing their transformation process. To this end, Nagarro integrates the necessary innovation expertise with industry experience and an innovation methodology toolkit, which comes into action through workshops and lectures. We understand innovation design as a logical extension of IT consulting.

Connecting small steps

CxOs have long been familiar with the key data required for business transformation. Nevertheless, projects lack innovation. A practical solution is to bring small steps and projects into context with a future vision. This objective is implemented through collaboration with specialists from several domains, for instance in dedicated innovation workshops. After all, as Albert Einstein once said, “Problems cannot be solved with the same mind-set that created them.” Click here for quick tips on innovation and business transformation.

A CIO's guide to managing exponential growth in life-sciences organisations

By Ashish Gupta, Senior Director, Life-sciences, and Purav Gandhi, Director, Life-sciences

Managing an IT ecosystem is often deemed a challenge only for large pharmaceutical and biotech organizations. But our experience of working with life-sciences organizations over the years indicates that IT challenges start at a nascent stage. In fact, young pharmaceutical companies face unique IT challenges as they move through a rapid growth cycle. In our view, these organizations are akin to pediatric patients: their bodies have the complexity of a mature adult, but with greater sensitivity. Thus, the right foundation sets the baseline for future growth.

6 challenges faced by young life-sciences organizations

As a CIO, you are likely to encounter six unique challenges while putting a foundation in place for your organization:

Complexity: Developing an IT infrastructure that matches big pharmaceutical companies in maturity and complexity, but at a smaller scale and with a relatively small budget.

Scalability: Ensuring scalability from the foundation stage, improving replicable processes, and accelerating maturity.

Speed of Execution: Ensuring rapid implementation with small teams, as there is limited time to develop and scale internal teams.

Optimized Vendor Ecosystem: Simplifying the vendor environment, to avoid dealing with multiple vendors and to ensure compatibility across systems.

Seamless Collaboration: Ensuring an open architecture with the ability to collaborate seamlessly for research, development, and commercialization purposes.

Total Cost of Ownership: Managing the total cost of ownership (TCO), as there is limited capacity to make upfront investments in infrastructure.

Even at a smaller scale, the number of touch-points with scientific, manufacturing, and supply chain vendors is large. For example, the Research & Development department must manage a network of scientists and epidemiologists internally or externally (or sometimes both), interface with a network of contract research organizations, manage the data and evidence flowing in from various internal and external sources, and then ensure an appropriate support system for developing materials for submission to the regulators. In parallel, a process needs to evolve to ensure launch readiness for the product in terms of launch planning, sales force readiness, customer relationship management systems, etc.

Key elements to ensure fundamental readiness at a small scale

At an early stage, we have seen that life-sciences organizations are limited by the budgets that can be allocated to IT initiatives, as well as by the bandwidth to get involved in designing architecture and processes. However, even with these constraints, it is critical for IT to ensure the company is fundamentally ready for eventual rapid growth:

Information Management: Data, evidence, and insights are the biggest assets and need to be managed carefully so that they can be used over time.

Traceability: Systems need to be traceable from a regulatory as well as a management standpoint.

Scalability & Replicability: While initial data and other IT requirements are less intensive, life-sciences organizations need to scale very quickly after approvals, hence the need for scalability and replicability.
Collaboration: The ability to share information internally across stakeholders, as well as externally with vendors and collaborators, will define the agility and speed with which a company scales.

Security: While all the above principles help differentiation, security is of paramount importance at each stage.

Organizations need to transform and adapt in order to survive and manage eventual rapid growth. They need the right expertise to become future-ready by leveraging technology and innovation. Read about our capabilities across the life-sciences and healthcare value chain.

IT requirements evolve through a maturity curve

Our research into the journeys of life-sciences organizations has revealed that IT requirements evolve through a maturity curve as the organization grows rapidly. Baseline IT, including communication (e.g., email), collaboration (e.g., SharePoint), and data management, usually forms the foundational component at the start of the journey. Process management among internal as well as external stakeholders, and evidence management, become the next important areas. While this can be true for any industry in general, the regulatory and security constraints of the life-sciences industry make these areas a bit more complex. Once these components are in place to allow running business as usual, investments in digital engagement with physicians and patients, and in other innovation areas, form the basis of differentiation in the business model.

Start with a dependable IT roadmap

It is important to start with an end reference architecture in mind in order to build a dependable IT roadmap. We have identified a detailed roadmap that spans the Research & Development, Launch, and Commercialization areas, with activities that are foundational, advanced, and potential differentiators. To read more about these, please download our detailed “CIO Playbook for Managing Exponential Growth”, and engage with us in a conversation about the IT challenges you are facing as an organization. While developing this roadmap, it is important to balance the immediate requirements with the long-term vision of the organization. Our interactions with CIOs and IT leadership across various life-sciences organizations have helped us identify key questions that need to be answered during the implementation of an IT roadmap in the growth phase, across all three areas:

I. Research & Development

While the foundational components of today are developed in an opportunistic fashion, will we be able to use them in a repeatable manner in the future?

What will be the communication and coordination requirements of the organization, based on the broader business strategy?

What competitor intelligence and predictive analytics will be required to support future pipeline decisions?

II. Launch

What are the launch planning, tracking, and execution systems required to support the channels appropriate for the products?

Which data and information needs have to be harvested today in order to support pricing and forecasting decisions in the future?

What are the assets being developed during the launch stage that can be leveraged in the future?
III. Commercialization

What are the right platforms to support the products and the sales & marketing strategy for the initial product(s)?

What is the optimal investment in a customer relationship management suite, and how should ROI from this system be ensured?

What will be the right differentiators in terms of physician or patient engagement, and how can today's investments support them tomorrow?

5 focus areas to enable the right foundational investments

We have identified five focus areas where the right foundational investments have proven to create value in the longer run as companies scale:

1. Enable collaboration & information management: Pharma and biotech companies grow very rapidly as they cross each hurdle during the development, approval, and launch phases. With this exponential growth at every stage, it is critical to preserve the core DNA of the company in terms of both knowledge and culture. Scalable, easy-to-use, and widely adopted collaboration platforms and knowledge management systems play a large role in facilitating the rapid onboarding of large teams and empowering them to operate efficiently in a short time.

2. Data & evidence management roadmap: Data and evidence have constantly proven to be the most critical assets for life-sciences organizations in the new era of evidence-based medicine. Organizing this data and evidence in a transparent and easily accessible manner, so that it can be used easily in the future, is equally critical. Data management decisions made with only a short-term view have time and again resulted in larger eventual costs.

3. Analytical capability development: With the growing amount of data and its applications, the field of analytics has expanded beyond imagination. Analytical applications require a diverse set of functional and technical capabilities, depending on the use case. It is important to develop an analytics roadmap not only with a clear mapping of the required internal skillsets, but also with the right mix of partners to support these requirements in a timely and cost-efficient manner.

4. Omni-channel customer engagement infrastructure: The worlds of e-commerce and commerce are now seamlessly merged; it's not about the phone or the desktop or the store – it's about all of those, and marketers need to provide a consistent and seamless experience regardless of channel. We're quickly moving to a world where digital has gone beyond mobile to include technology-enabled and connected screens of all forms and sizes, requiring a truly integrated strategy.

5. Digital patient engagement infrastructure: Patients research medical information, treatment options, provider selection criteria, and dozens of other considerations, and have a great deal more “skin in the game” now. Moreover, in a patient-centric world, convenience is the new competitive currency, and hence engaging with patients in a convenient, content-driven manner will form your differentiation in the longer run.

Recommended reading – Elevating provider engagement to the next level: The healthcare landscape is transforming at a fast pace, with futuristic hospitals and physicians seeking more innovative ways to expand beyond the usual patient engagement. Explore the endless possibilities in personalized healthcare and next-generation physician engagement.

With all of the above trends in mind, while progress and speed of execution are necessary, planning is paramount at this stage. We recommend a three-stage process.
3-stage process to handle trends for growth and expansion

(a) Current IT maturity assessment: a diagnostic process to evaluate where the organization stands today and where it needs to be in terms of maturity and sophistication, so that IT aligns with the broader organizational strategy.

(b) Roadmap development & planning: to chart a path to the desired state of maturity over the next 12-36 months, in line with other anticipated business milestones and within a manageable budget.

(c) Balanced scorecard: to monitor progress against the plan on a continuous basis and identify the need to course-correct at the appropriate time.

With the above planning process in place, we have ample experience in helping clients understand their current IT footprint and develop a long-term enterprise IT strategy that aligns with their broader business strategy. While it is important to be best-in-class in the key areas that are core to your business, our approach offers a pragmatic view, helping clients avoid over-investing in areas that are non-core. We would love to hear about the IT challenges you are facing in your organization. Please write to us with your IT challenges and learnings at insights@nagarro.com, or connect with us directly at ashish.gupta02@nagarro.com and purav.gandhi@nagarro.com.

The key to winning in an on-demand world: Pricing.

In an on-demand economy, every business is a revenue-management business. As industries get disrupted by changing consumption models, the opportunity for both challengers and incumbents lies in maximizing ROI and optimizing their assets, which is exactly what the principles of revenue management provide. The goal of any business is to maximize profits, which usually means maximizing revenues (exceptions would be industries ruled by winner-takes-all dynamics, in which case the company has an incentive to maximize revenue without consideration of profit, at least for a period of time). Given the advances in technology and changing consumer behavior, on-demand companies are popping up and upending incumbents who had built solid businesses over several years. A look at companies like Uber, Oyo, and Airbnb reveals that they have all grown in industries with one or more of the following characteristics:

High fixed costs: Means low profit margins to begin with, which provides a reason to look at algorithm-based pricing models to maximize revenues.

Use-it-or-lose-it goods or services: Means that the producer has to sell goods or services at a price at least equal to or higher than the marginal cost of producing that good or service.

Regulation: Makes the market less efficient. This is a reason for entrepreneurs to disrupt the existing players, on the back of other technological advances that incumbents are late to adopt (Uber changed the paradigm from black-and-yellow regulated taxi services to any private car that can be hailed using a smartphone).

Lack of goods substitution: Means that consumers cannot swap their demand for one product (say, a room to stay for the night) with any other product, so demand for the product remains regardless of pricing.

So how does revenue management play into all this? In any business, the easiest way to maximize profits is to generate the maximum amount of revenue per unit of cost that goes into generating that revenue. This is especially true in high-fixed-cost industries (like airlines, hotels, and car transportation), where maximizing revenue for every unit of cost is done through price discrimination: the philosophy that allows companies to charge different prices based on factors like demand and supply, macro factors like weather and economic conditions, and the consumer's willingness to pay. In use-it-or-lose-it industries, by understanding the relationship between demand and supply and the levers that move the price (the intersection of the demand and supply curves), companies can use demand forecasting and customer behavior analysis to determine the optimal pricing of their goods and services, thereby maximizing revenue and profits. This is what revenue management is all about; a toy example follows below. By applying revenue management principles, on-demand economy companies like Uber (upending the regulated taxi services) and Airbnb (upending the hotel business) have built systems that provide real-time pricing using demand forecasting and customer behavior analysis, and they successfully consummate each transaction by becoming the matchmaker between the buyer and the seller. This has enabled them to build leadership positions in their industries. Ironically, it has also given the incumbents reason to look at new-age revenue management systems to stay competitive and maximize their profits.
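As a toy illustration of that forecasting-to-pricing step (not any particular company's algorithm), assume demand for one time window has been fitted as a linear curve D(p) = a - b*p:

```python
# Toy revenue-management sketch: with linear demand D(p) = a - b*p,
# revenue R(p) = p * D(p) peaks at p* = a / (2b); with a unit cost c,
# profit (p - c) * D(p) peaks at p* = (a + b*c) / (2b).
def optimal_price(a, b, unit_cost=0.0):
    return (a + b * unit_cost) / (2 * b)

# Hypothetical forecast: ~200 ride requests at a price of zero, losing
# 8 requests for every $1 increase in price.
a, b = 200.0, 8.0
p = optimal_price(a, b)
print(f"optimal price ~ ${p:.2f}, expected demand ~ {a - b * p:.0f} rides")
```

In practice, a and b would themselves be re-estimated continuously from demand forecasts and customer behavior signals, which is where the real engineering effort in a revenue management system lies.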
The beauty of revenue management principles is that they are applicable to any business, not just the ones that are getting disrupted: any business that uses assets to generate revenue is fair game. As time goes by, the demand for and application of revenue management systems will only grow, as incumbents in disrupted industries play catch-up and new industries get disrupted. The other trend that bodes well for the adoption of revenue management systems is the ‘on-demand-ization’ of traditional industries. Commerce fulfillment and last-mile delivery are among the industries that have recently begun to experience the benefits of on-demand, and this will only continue to grow. The airline industry has been a pioneer in introducing mathematical revenue optimization for ticket pricing. Within that industry, Lufthansa has been a leader in this area, and Nagarro has helped them develop strong revenue management capabilities. Along with established practices in design, digital, and data analysis, Nagarro is able to apply agile methodologies to leverage these capabilities and maximize profitability for its clients.

Delivering successful IoT projects – An Insider’s View

IoT projects demand a new type of consulting approach, one that encourages intense collaboration between domain experts and IT consultants. This approach needs to bring together breadth of integration know-how as well as depth of domain- and device-specific knowledge. One consulting company that handles this approach well is Vinnter in Gothenburg, Sweden. Vinnter's CEO, Eric Michélsen, has extensive experience of handling IoT from an industrial perspective. He joined the company after a long stint with the Swedish lock manufacturer ASSA ABLOY, where he headed development efforts to add connectivity to the company's products. Michélsen had experience of partnering with Nagarro during his tenure at ASSA ABLOY.

The relative complexity of IoT projects, the lack of standards, and unresolved security problems call for new forms of partnership and cooperation. This is why Vinnter and Nagarro decided to collaborate and form a mutually enriching partnership. “While we come from the hardware based technology development side, Nagarro brings in the expertise in software services,” explains Michélsen.

Eric Michélsen believes that the essential ingredients of a successful IoT project are understanding customer expectations, identifying a roadmap of business goals, and adapting technology to meet evolving future business goals. He cites the example of an industrial organization that has been manufacturing a proprietary mechanical product for 40 years. The organization now faces the need to add connectivity to the product. This calls for an app, and once the app is available, the company will interface with its customers through it. In doing so, a new business model is created. “The sales process will be totally different compared to marketing a mechanical product. It becomes a new relationship driven business model. The customer needs help to see the whole picture - from brand management, 24/7/365 operations, data analysis and pricing to building a call centre.”

This is also how Nagarro perceives an IoT system – in four parts comprising devices, connectivity, a platform, and the new business model that these three finally culminate in. Nagarro is a premium global IT services provider with a strong value proposition in bringing next-generation technology innovation to its clients through its Enterprise Agile model. Kanchan Ray, Head of Innovation at Nagarro, believes that “Nagarro as a System Integrator, can connect millions of apps with billions of users with trillions of things”. Kanchan says that we are currently in the third generation of IoT platforms (after mainframes and client-server computing), where machine learning and analytics are very important components.

Michélsen, meanwhile, estimates that 34 percent of companies have already implemented IoT projects, 22 percent are in the process of doing so, and 34 percent are planning some form of IoT-based project. More interestingly, however, only four percent of companies have made a business out of IoT. He also emphasises the importance of analysing data. This can be done both with data that the customer already has and with data that the customer gathers. “Intelligence must be added and we are developing intelligent tools for analysis that enable the data to tell their story.” And this is one of the many reasons why Vinnter's cooperation with Nagarro is so essential. Vinnter can now leverage Nagarro's deep knowledge and practical experience in analytics and other domains to enhance their IoT capabilities.
“We (with Nagarro) now possess know-how in hardware, communications, cloud solutions and business models,” says Michélsen.

Over the years, Eric has seen both successful and unsuccessful IoT projects. The fundamental reason why services or products do not succeed is that customers reject them. It is not always certain which services will survive the churn, but in many cases the survivors have been the internal services that are able to live up to the agreed level of service and support. In the early days, one reason for failure was also that the costs of sensors, connections, and communication were far too high compared to what they are today. He has also seen projects where the suppliers decided upfront how much they were going to take in revenues, which meant that once they had decided what to earn, the services proved far too expensive for customers.

From his experience of managing industrial IoT projects, long before the IoT concept became popular, Eric Michélsen believes that every service within an IoT project must be separated from the others to see how the service really “feels”. He also attaches importance to separating services at the earliest possible stage, which enables you to do an ROI analysis on each service individually. “When services are baked into each other, or concealed in the physical product's business, it's difficult to see how well each of these services is actually doing,” says Michélsen.

IoT 101: A primer on the Internet of Things

The Internet of Things (IoT) has become a popular buzzword in technology forums and organizations in recent years. But how many truly understand the IoT system? IoT consists of a complex web of varied parts, including electronic circuitry, embedded systems, network protocols, and more. This quick guide helps you classify and understand the various parts of the comprehensive IoT ecosystem.