Recent posts

The hitchhiker’s micro-guide to the Cloud

What can be outsourced to the Cloud? How long are the lead times? And how do you go about it? There is plenty of material out there that breaks down the what, when, and how; but the experts at Nagarro recommend checking beforehand and performing a proof of concept instead of tinkering with quick solutions. For starters, a Cloud solution is expected to be more cost-efficient. This can be true; however, availability, agility, and fail-safety are becoming priorities as well. In the Cloud, there is no such thing as a clean bill of health. Here, necessity is the mother of invention.

Scan for services

The most important initial question is: which services does your business need, and where do the critical points lie? A move to Cloud infrastructure should NOT be calculated based on servers, operating systems, and applications. It is more expedient to screen departments and services and subsequently assess feasibility, cost-effectiveness, and availability. At Nagarro, for a medium-sized company with 200 to 500 employees, approximately one week is scheduled for this analysis. We differentiate three areas in IT:

Core systems for sales and production

As a rule of thumb, Nagarro experts recommend keeping systems that support core production or critical processes in their on-premise environment. Here, the question of possible latencies is key. If latencies are unacceptable, the verdict is simple: no Cloud suitability.

Back office, ERP & Co.

Messaging, CRM, ERP - the classic back-office solutions - and all standardizable applications are well suited for the Cloud. Best case: replace them with a SaaS solution. If that is not possible, it is reasonable to assume that if an application can be placed on a virtual machine, it is 95% Cloud-compatible. Exceptions can occur in certain industry solutions, such as ADVOCAT, when the software's client-server model does not (yet) support communication over the Internet.
Websites, eCommerce

Genuine no-brainers are all systems that are strongly outward-facing and/or already operated externally. These services, such as websites, eCommerce, extranets, portals, etc., are gaining popularity thanks to the increased bandwidth, availability, and flexible resources of Cloud providers.
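The rules of thumb above can be condensed into a tiny triage sketch. The field names and the decision order are illustrative assumptions, not a Nagarro tool, and no sketch replaces the roughly one-week screening the text recommends:

```python
def cloud_verdict(service):
    """Rough Cloud-suitability triage following the rules of thumb above."""
    if service["supports_core_production"] and service["latency_sensitive"]:
        return "keep on-premise"      # unacceptable latencies: no Cloud suitability
    if service["standardizable"]:
        return "SaaS if available, else lift onto a virtual machine"
    if service["externally_facing"]:
        return "move to the Cloud"    # websites, eCommerce, extranets, portals
    return "needs a proof-of-concept"

# Example: a classic standardizable back-office CRM system
crm = {"supports_core_production": False, "latency_sensitive": False,
       "standardizable": True, "externally_facing": False}
print(cloud_verdict(crm))  # SaaS if available, else lift onto a virtual machine
```

The point of the sketch is the order of the checks: latency-critical production systems are ruled out first, before cost or standardization even enter the picture.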

Building the next-gen adaptive learning environments

Wearable technology, or simply wearables, are smart electronic devices that can be worn on the body as accessories. In recent years, several types of wearables, such as wrist wear, clothing, eyewear, and more, have emerged onto the consumer scene. The market for wearable technology continues to grow and find new niches. In fact, industry analysts (IDC, Gartner) predict double-digit growth rates over the next five years, with more than 230 million wearable devices to be sold in 2020. Nagarro, too, recently got the opportunity to participate in an advanced research project on the application of wearables in education, launched by the German Federal Ministry of Education and Research (BMBF). The aim of the SensoMot project is to use Sensor measurements to determine Motivation indicators for controlling adaptive learning content. The project will run for around three years (April 2016 to March 2019) with a total investment of €1.68 million and includes a consortium of six partners from industry and the research community under the leadership of Nagarro.

How could it work in the future?

Let's say you are enrolled in a university course in electrical engineering and simultaneously pursuing a distance learning course to become a technician. You want to start working through your course material right away. You put on your wearable fitness tracker, connect it to your tablet, and open the e-learning app. You start with a difficult chapter on resistors connected in series and parallel, and begin to read the text with its abstract formulas. The wearable monitors your heart rate, skin resistance, hand movements, and other parameters. The system detects that your motivation level drops after a while. It then adapts the learning content and proposes a different educational path with more visual explanations, where you can "play" with an interactive graphic and grasp the concepts in a "learning by playing" mode.
The system could also make things easier by simplifying the problem - starting with two resistors and making it more complex in a second step - inserting some intermediate exercises to give you a feeling of success, or proposing that you repeat some basics. Depending on the student's level, and on whether he or she is more of an analytical or a visual learner, content can be adapted dynamically to maintain high motivation and avoid boredom or overstrain.

Project overview and goals

Motivation is a major factor in facilitating deep learning processes. It makes learning fun and interesting and, as a result, improves the overall learning process. If motivational problems are detected early, the learning process can be modified and the learning content adapted to the needs of the student. The goal of the SensoMot research project is to predict critical motivational conditions using sensor measurements collected by wearable devices. By deriving adaptive mechanisms, the learning process can then be tailored to the motivational needs of the student. Based on this objective sensor data (e.g., indicating stress or boredom), algorithms in the learning environment will adjust internal variables such as the learning speed or level of difficulty, or propose alternative learning paths. Prototypes of adaptive learning scenarios for a university course on "Nanotechnology" as well as a distance learning course on "Electrical Engineering" will be implemented in the project. These will be integrated and evaluated on the e-learning and e-testing platform "CBA ItemBuilder", a product developed by Nagarro for its customer DIPF (German Institute for International Educational Research) that has been in use for many years. The resulting learning systems will thereafter be made available to a broad variety of educational applications.
The innovation around SensoMot is that, for the first time ever, motivational obstacles to learning can be detected via non-obtrusive sensors and the learning content adapted accordingly. Learning motivation increased in this manner could lead to greater learning success and lower dropout rates in a wide range of technology-based learning situations.

Challenges

We are facing three major challenges:

Can we find wearable devices that provide the "right" sensor data in streaming mode so that motivation levels can be deduced? This is not a given, as many commercial wearable devices do not provide open interfaces or access to raw data. Typically, heart rate, skin resistance, eye tracking, and mouse/keyboard movement are used in similar research projects.

Can we apply adequate pattern recognition and machine learning techniques to infer motivation levels from raw sensor data? The intended process chain is depicted in the diagram below, which shows the consolidation of different sensor data via pattern recognition and machine learning to identify motivation indicators and dynamically adapt the learning content accordingly.

Can we ensure that this sensitive data is secure enough to build the trust needed for students to be willing to use such a system in their e-learning process?

Outlook

The result of this research could be a milestone for building adaptive learning environments and making learning processes more efficient, particularly in the context of lifelong learning. And even beyond education, the ability to detect motivation levels could serve many other applications, e.g., assessing the quality of child care or aged care work.

Project consortium partners

SensoMot is conducted by a consortium of six industrial and research partners and is sponsored within the BMBF program "Tangible Learning" ("Erfahrbares Lernen").
Nagarro GmbH - Munich, leading the consortium, will integrate sensor data collection into the "CBA ItemBuilder" platform and develop the basic mechanisms for adapting the learning content at runtime.

The German Institute for International Educational Research (Deutsches Institut für Internationale Pädagogische Forschung, DIPF) is responsible for the scientific project coordination.

The distance learning institute Fernlehrinstitut Dr. Robert Eckert GmbH (Eckert) is the second commercial consortium member besides Nagarro and will provide the test bed and validation for the distance learning course on "Electrical Engineering". It is the natural first user of such a technology.

The Technical University of Ilmenau (TU Ilmenau) will provide the test bed and validation for the university course on "Nanotechnology" and contribute its competency in usability engineering, man-machine interaction, and media engineering.

The Medical School Hamburg (MSH) will bring in its research competence for the theoretical and empirical identification of motivation indicators and the derivation of adaptation algorithms.

The Leuphana University Lüneburg (Leuphana) will provide the machine learning technology to deduce motivation levels from the raw sensor data.
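To make the sensing-and-adaptation loop described above concrete, here is a toy sketch in Python. The thresholds, feature names, and adaptation rules are invented purely for illustration; SensoMot's actual indicators are to be learned from data by the consortium, not hand-coded like this:

```python
import statistics

def motivation_state(heart_rates, skin_conductance):
    """Classify a learner's state from two sensor windows (toy rules only)."""
    hr_trend = heart_rates[-1] - heart_rates[0]
    hr_var = statistics.pvariance(heart_rates)
    sc_var = statistics.pvariance(skin_conductance)
    if hr_trend > 10:                    # climbing pulse: treat as stress
        return "stressed"
    if hr_var < 1.0 and sc_var < 0.01:   # flat signals: treat as boredom
        return "bored"
    return "motivated"

# Invented adaptation rules: ease off when stressed, challenge when bored.
ADAPTATION = {
    "stressed": (-1, "more visual, interactive explanations"),
    "bored": (+1, "harder intermediate exercises"),
    "motivated": (0, "keep the current learning path"),
}

def adapt(difficulty, state):
    """Return the new difficulty level and the proposed content change."""
    delta, action = ADAPTATION[state]
    return max(1, difficulty + delta), action

state = motivation_state([72, 78, 88], [0.30, 0.30, 0.31])
print(adapt(3, state))  # (2, 'more visual, interactive explanations')
```

The real project replaces the hand-written `if` rules with pattern recognition and machine learning over the raw sensor streams; the loop structure (sense, classify, adapt) stays the same.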

Ever wonder what happens to your feedback survey?

Every quarter we send a survey to all of Nagarro’s clients asking them how we are doing across a range of parameters. A number of our clients are not sure what exactly happens after they hit that “send” button and how that data is used, so I thought it might be helpful to share some of that detail.

Forget 'out-of-the-box' thinking and explore the riches within

With headlines like "Apple under Tim Cook: A nicer company, but a better one?", some people are questioning why, five years into his tenure, Apple has not delivered a breakthrough product. Such is the harsh reality for product companies whose business models are predicated on year-on-year growth.

Does the ‘Handshake Deal’ still exist?

You have probably all experienced this phenomenon: you plan an important project (a new kitchen, a renovation of your flat or house, a move to a new city). You do not do this often, it is not your core competence at all, and it is a bit complicated to sort through all the options you have - what to do in what sequence, what on your own, and what with professional help. Sometimes you get lucky and a player comes into the game whom you can blindly trust, because he or she delivers the famous "handshake quality". What makes the difference? What are the hallmarks of this cozy feeling of being in good hands? Here are a few (non-exhaustive) elements:

You get the feeling of being understood immediately (a lot of intelligent questions showing the "have done it before" level of experience).

A bidirectional flow of inspiration gets going - you learn more about how to solve these kinds of problems because the problem-resolution processes are transparent.

You get a structure before solutions to individual problems, so you get a feeling for the path through.

You become aware of upcoming decisions ("no surprises"). When a decision needs to be reconsidered, it is done openly, based on arguments from both sides. Once the decision is taken (in whatever direction), the switch is set to "execution" for everyone.

There are exceptions (otherwise it would not be a problem we are talking about here), but each exception is acknowledged objectively and always comes along with either a solution proposal or, in more complex cases, with options and a recommendation.

After a few trust-building successes, you loosen controls and transfer more responsibility. In a fortunate coincidence, you find a true partnership, leading to claims like "he/she handles my renovation as if it were his/her own" - the highest level of trust. Maximum care for budgets and resources comes into the game, and the relationship develops.
You, of course, recommend that service setup to all your friends and, where you can, you return the favor. When you explain this to your friends, you tell them that you made "handshake contracts" and that a word once given counts. This is, of course, a rare coincidence, and it can only come about if the circumstances are right. Here are a few elements to check:

Do recommendation and trust play a major role in your project setup? This can happen in very local setups (e.g., a renovation in your hometown, where you meet all the craftsmen during social life at tennis, golf, or your children's school).

Is quality a major driver for your project, and can you afford to invest in it? Typically, you rely on this kind of ecosystem when the stakes are high in terms of risk, the number of resources you engage, or the expected result.

When you transfer this way of thinking to IT projects, and assuming the two criteria above apply, does the following make sense? Purchasing departments (sometimes inflated by external contracting consultancies) shape contract frameworks consisting of tons of mini-clauses that try to catch every exception, limiting the room to maneuver for business and suppliers to zero. The effort for contracting and contract adaptation reaches a volume that endangers problem-resolution resources in terms of time and effort.

Of course, it would be unhealthy to try this without contracts; but the shape of the contract, the solution, and the delivered customer experience should be in sync, as a basis for more handshake quality.

Connecting the omni-channel jigsaw puzzle

I recently read a very nice book that I purchased on Kindle and wanted a few of my friends and colleagues to read it too. When I recommended the book to them, I also asked them to check whether Amazon/Kindle was recommending the same book to them. For some, it was there somewhere in the 9th or 10th spot, and in many cases the book did not even feature in their recommendation lists. So it looks like even the most sophisticated ML/analytics-driven recommendation engines are not yet as 'knowledgeable' as we are. We are slowly getting into an 'analytics-led' age where every B2C (or even B2B) company is trying very hard to build the profile, or genome, of its customers. Understanding the customer is key for any B2C/B2B business, and that starts with an accurate customer profile created by collecting lots of data about the customer. The lure of 'data as the new currency' drove these companies to invest quite a bit in 'big data' and 'analytics' strategies, and most of them are yet to see any real ROI. Most companies now have an omni-channel presence to some extent and have invested a lot to make their presence felt across multiple channels - but that's about it.

So what went wrong?

The first problem was that they went too fast too soon. Most companies went from no customer information to a lot of data (not insight - plain data) in a very short period of time. They went crazy collecting customer data without a real strategy to use it. Since the cost of data storage had dropped dramatically, it became cheap to store it all in data lakes. In a sense, the data hype was similar to the dotcom boom, when everyone went to the internet without a proper internet-ready business plan or strategy.

"Gartner has predicted that through 2018, 90% of deployed data lakes will be useless. That word 'useless' should grab your attention. It's worse than a 'failed project'." (Source: Forbes)

The second problem was the lack of maturity of data science.
Companies hired a lot of data scientists to slice and dice their data and extract magic from it in order to up-sell and cross-sell. The algorithms were neither robust nor proven, and so the big data pretty much produced mere classifications and segments. The 'Netflix' model of recommendation was probably pushed into a lot of use cases where it does not even apply. Soon these companies realized that actionable insight is not the same as information when it comes to creating a customer profile.

The third thing that went wrong was the customer profile model itself. This, in fact, is the biggest problem of a big data implementation. Let's take the customer genome of my friends who buy books online as an example. There are probably at least 50 attributes that affect a customer's book-buying pattern (recommendations from friends, reviews in a newspaper, the price of the book, and mood, to name a few). Some of these factors are understood by data scientists to a reasonable extent, but there are attributes that are still way too fuzzy to factor into a real algorithm (emotional attributes like mood, for example). Even if a recommendation engine factors in 50% of the attributes - say 25 of these 50 - there are 126 trillion different ways to pick 25 attributes from a set of 50. So what are the REAL chances that the algorithm picked the right set? Currently, using a large number of attributes in any calculation is probably out of the question for most algorithms, as it needs enormous computing power. Hence most companies choose the easy attributes, and that reduces the accuracy and quality of the outputs.

Disconnected omni-channels do not work

Most companies created their omni-channel presence in a disconnected manner. They have a presence for many use cases across multiple channels, but the critical piece of stitching them together is missing.
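The combinatorial claim above (25 attributes out of 50) is easy to verify with Python's standard library:

```python
import math

# Number of ways to choose 25 attributes from a set of 50
combos = math.comb(50, 25)
print(f"{combos:,}")  # 126,410,606,437,752 -- about 126 trillion
```

So the figure in the text is exact: C(50, 25) is roughly 1.26 x 10^14, and any one chosen subset of attributes is just one of those 126 trillion possibilities.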
Mere presence on platforms such as YouTube, Twitter, and Facebook, with each channel tracked individually, not only yields limited data, it sometimes creates a false customer genome. As mentioned earlier, there are trillions of possible combinations of customer attributes. Now imagine these attributes spread across multiple channels with no one connecting them back. It's like solving a jigsaw puzzle with trillions of pieces - can't be that easy, right?

Then there is also the issue of missing data. Even if a company is present across multiple channels, a lot of critical customer behavior happens outside those channels. These data points are never captured and hence never factored into the algorithms. For example, Amazon has my book browsing and search history and tries to put together a profile of me, but Amazon does not have access to the discussions I had with a friend about a topic and the books related to it. Nor does it have access to my Google calendar to see what type of meetings I attend. Building a correct customer profile with such missing pieces is almost like solving a jigsaw puzzle with 100 trillion pieces, half of them missing. Figure 1 depicts how my customer profile data is distributed across different channels during the day:

Fig 1: Customer data owners during the day

It is clear that no single enterprise has access to all the profile data and hence cannot really build a 360° profile or genome of a customer. Also, note the amount of offline data that we still generate.

Connected omni-channel is the future

The three key aspects to consider while building a customer genome are:

Focus on more attributes - try to capture as many attributes as possible. The larger the number of attributes, the smaller the missing pieces of the jigsaw puzzle. Capture the emotional attributes as well; as analytics algorithms mature, they will be able to use this data in the future.
Connected omni-channels - the data collected from different channels needs to be stitched back together to create the full customer genome. Connecting browsing data from Amazon with video-watching patterns from YouTube will reveal much deeper profiles.

Data sharing with others - since it is clear that no single company will ever own all the data for any customer, it is important to build a data sharing mechanism with other players. If a company can combine YouTube data with cable TV data, that builds a more complete jigsaw picture. In many cases this will mean collaborating with rivals, but that is a different problem to be addressed.

It is evident that building a customer genome from omni-channel data is an enormous jigsaw puzzle that will never have all the pieces. But when companies add real-time personalization to their omni-channel strategy, customer profiles will have fewer gaps. A robust omni-channel strategy is a good start, but connecting the channels will get it closer.
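As a minimal illustration of the "stitching" step described above, attributes gathered per channel can be folded into one profile. The channel names and attributes below are made up, and real identity resolution and conflict handling are far harder than this sketch suggests:

```python
def merge_profiles(channel_profiles):
    """Fold per-channel attribute dicts into one customer genome.

    Records where each attribute came from; on a conflict the later
    channel wins (a real system needs proper conflict-resolution rules).
    """
    genome = {}
    for channel, attrs in channel_profiles.items():
        for attribute, value in attrs.items():
            genome[attribute] = {"value": value, "source": channel}
    return genome

profile = merge_profiles({
    "amazon": {"genre": "thrillers", "price_band": "mid"},
    "youtube": {"watches": "book reviews"},
    "cable_tv": {"evening_slot": "crime drama"},
})
print(sorted(profile))  # ['evening_slot', 'genre', 'price_band', 'watches']
```

Keeping the source channel per attribute is the design choice worth noting: it is what later lets you see which pieces of the jigsaw came from which owner, and which pieces are still missing entirely.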

Risk Management: A fine balance of sharing

Every engagement has a host of uncertainties associated with it, and these uncertainties bring risks to the engagement. When two parties contract with each other, the customer wants to hand over all the risks to the vendor, and the vendor wants to take on none of them. This is not a practical situation. A fine balance of risk sharing has to be achieved between the two, depending on the respective stakes involved for each.

How do the two contracting parties agree on this balance? The risks must be assessed by both parties in terms of the probability of occurrence and the likely loss if the risk materializes. The likely impact of an event (the risk value) is the product of probability and potential loss. A vendor can take on a risk where the likely impact is a fraction of the total contract price; if the risk value is significantly larger than the contract value, the vendor is unlikely to take the risk upon themselves.

We also need to understand who is in a better position to mitigate the risk. If all the mitigation lies with the customer, then trying to hand the risk over to the vendor is unfair; but if the customer is entrusting critical IP to the vendor, then the vendor should take due care of it. If the vendor fails to protect the customer's IP, the customer can impose heavy penalties on the vendor for handling it casually.

For a successful relationship, a common understanding and an open discussion must take place between the customer and the vendor, to arrive at a single view of the risks and decide who is in a better position to mitigate each of them. They need to define a joint mitigation plan and accept responsibility for their own misses. Consequential damages and loss-of-business risks are very hard for a vendor to take on. Penalties can then be fairly assigned based on the responsibilities and likely returns of the respective parties.
Beyond the contract, once the engagement materializes, for a win-win all the risks of the engagement should be jointly maintained and reviewed frequently by both parties. Demonstrating the actions taken to mitigate risks helps cultivate mutual confidence and a fruitful long-term relationship.
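The risk arithmetic described above can be sketched in a few lines. The 0.5 threshold below is an invented illustration of "a fraction of the total contract price", not a contractual rule:

```python
def risk_value(probability, potential_loss):
    """Likely impact of a risk event: probability times potential loss."""
    return probability * potential_loss

def vendor_may_accept(probability, potential_loss, contract_value, factor=0.5):
    """A vendor plausibly accepts only risks whose value stays below some
    fraction of the total contract price (the factor is an assumption)."""
    return risk_value(probability, potential_loss) <= factor * contract_value

# A 10% chance of a 1M loss, weighed against a 300k contract:
print(risk_value(0.10, 1_000_000))                  # 100000.0
print(vendor_may_accept(0.10, 1_000_000, 300_000))  # True
print(vendor_may_accept(0.50, 1_000_000, 300_000))  # False
```

The example shows why the same potential loss can be acceptable or unacceptable depending purely on its probability relative to the contract value.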

Shoot ‘em up… 1… 2… and 3

Space Invaders - a vertical, rectangular video game that is a digital representation of a battle against aliens. Not everyone may have played it, but it is clear from the image here that the point of this game is to shoot them up! There are three interesting observations we can extract from this classic game to draw analogies with when and how we troubleshoot in a technical scenario:

1. Troubles, if not handled early, will grow (the aliens' speed of descent increases with time).

2. In a majority of cases, if not all, there is a pattern we need to decipher (the aliens travel right to left and then down).

3. We need to maneuver, aim, and then shoot (assess which alien is of what type, which one is near, and what it can do; move to that area, aim, and then attack).

This brings three key attributes to the table for any troubleshooting exercise: problem analysis, problem identification, and problem rectification. Troubleshooting (i.e., rectification) is in every sense a chicken-and-egg problem: when you need to know how to troubleshoot, you don't have enough information to know what to do (i.e., identification); and once you have this information, you don't necessarily need troubleshooting, because you can actually pinpoint the problem based on your own experience (i.e., analysis). It is a vicious cycle, really. But as they say, experience builds on practice, and to practice, you need to start somewhere. Let's explore these three simple steps to troubleshoot any type of technical issue you might come across.

1. Problem Analysis

What: Problem analysis starts with defining the problem as clearly as possible. It could be a simple statement, or a statement supported by symptoms, constraints, or other details. This definition should serve as the single source of truth for the whole duration of the troubleshooting exercise.

Where: Once a definition is worded, the problem needs to be narrowed down from the larger context to more specific ones.
To begin with, there will be multiple probable contexts that may be causing the problem. Your past experience and your ability to think in different directions and see possible connections will come in handy during this step. Try to think as wildly as you can. It's OK to be wrong and way off; think through all possible places that could be the source of the problem.

Why: For each of the identified contexts, think further about why it may be the origin of the main problem. Think of the various reasons that could make this particular context the cause of the real problem you are trying to troubleshoot.

Interestingly, for the what part, you just need to ask the right questions of the people facing the problem (if it is not you yourself) and word the answers as unambiguously as possible. For where, you need to think in multiple directions, talk to the various people who belong to the scenario, and discuss all possibilities; and for why, you can consult multiple subject matter experts to pick up all sorts of possible threads. In a nutshell, problem analysis can be a very collaborative and social exercise if done in a structured way. As an example, let's look at some possible where and why entries for a sample problem (what) to troubleshoot:

What: There is a delay in the check-in process of the application.
Where:
- Delay caused by database queries / stored procedures
- Delay caused by the server-side application
- Delay caused by the client-side application
- Delay caused by the app server infrastructure
- Delay caused by the database infrastructure

Why:
- Delay caused by database queries / stored procedures
  - A missing index is causing a lengthy table scan
  - Inapt use of select * in multiple places is putting extra data in the result set
  - Use of dynamic queries is preventing caching of the execution plan, causing additional delay
- Delay caused by the server-side application
  - An inefficient logging mechanism is taking additional time on every request
- Delay caused by the client-side application
  - The service call and the UI update happen on a single thread
  - Data virtualization in the result grid is not done
  - Chunky calls are used to fetch data from the server
  - The service connection is not pooled
- Delay caused by the app server infrastructure
  - Inadequate hardware capacity
  - Inapt configuration settings at the web server causing unnecessary delay
- Delay caused by the database infrastructure
  - Inadequate hardware capacity
  - A frequent backup policy causing intermittent load
  - Inapt database configuration settings causing un-optimized query response times

Well, this is cumbersome if we write the list out as is. There is a better alternative, called the fishbone diagram. The fishbone diagram is an interesting tool for doing such problem analysis in a fun and iterative way. There are multiple online and offline tools that let you draw this diagram for storage and retrieval. When starting your analysis, you can begin on a whiteboard and see how soon it starts making sense. A fishbone diagram for the discussed problem may look like this:

Fig 1: Problem analysis on a fishbone diagram

2. Problem Identification

Once you have a certain level of analysis, it's time to start narrowing down the analyzed space and identifying the root cause of the problem. To begin with, you might want to tweak the diagram to re-order the identified causes following a 'close to far' principle.
The closer a reason is to the what, the more likely it is to be the cause. You can order them based on gut feeling, past experience, or any other exposure you may have. It may be wrong, but that's fine; it will be right the next time, if not this time - you have to start somewhere. Problem identification is an iterative activity and generally continues from a few hours to a few days, depending on the scenario you are troubleshooting. For this very reason, you need a snapshot view of where you are in your identification process at all times. Color-coding the identified reasons in the fishbone diagram itself is the cleanest, non-intrusive way to capture this snapshot view as you dig deeper into each of the listed whys. You could use the following five statuses and colors:

- Blue: case is yet to be checked.
- Yellow: identification is in progress.
- Green: case evaluated and not found to be the reason; it is clean.
- Red: case found to be true, and a fix has been applied.
- Purple: case found to be true, but not actually impacting this particular problem, and therefore deferred for later investigation.

The same fishbone diagram with the snapshot view will look like the following when updated with the status of the ongoing identification activities:

Fig 2: Problem analysis on a fishbone diagram (with status captured)

3. Problem Rectification

The simplest step is fixing the problem once you know it. You may end up with one or more reasons causing the main problem. Once you have fixed those, the troubleshooting is completed successfully. One or more red nodes in your fishbone diagram depict this successful status. In some tricky scenarios, even after a few red cases, you may still face the problem; this calls for another cycle of steps 1, 2, and 3 above - but this time over the existing fishbone diagram, to find more whys in each where, or even to identify some new where cases.
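The diagram-plus-status bookkeeping described above is easy to mirror in code. Here is a small sketch using the check-in example; the particular causes and their statuses are invented for illustration:

```python
from collections import Counter

# Each 'where' branch maps its 'why' causes to a color status.
fishbone = {
    "database queries": {"missing index": "red",
                         "dynamic queries": "green"},
    "client-side app": {"single-threaded UI update": "yellow",
                        "chunky server calls": "blue"},
    "db infrastructure": {"frequent backups": "purple"},
}

def snapshot(diagram):
    """Count causes per status: the at-a-glance view of where the hunt stands."""
    return Counter(status
                   for causes in diagram.values()
                   for status in causes.values())

def solved(diagram):
    """Troubleshooting has succeeded once at least one cause has turned red."""
    return snapshot(diagram)["red"] >= 1

print(dict(snapshot(fishbone)))
print(solved(fishbone))  # True
```

Keeping the statuses inside the same structure as the causes is exactly the point of color-coding the diagram itself: the snapshot view falls out of the data with no extra tracking.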

4 key considerations for an effective automation testing strategy

Considering the wide user base that a typical web application targets, the need to test software systems on many browsers, platforms, and mobile devices has expanded the testing scope dramatically. However, a purely manual approach to testing is not only inefficient but also error-prone. An automation testing strategy for web applications is a must to enable quick time to market, maximize coverage of a wide testing scope, and optimize cost. It also ensures that there is no room for error in the current and previously developed scope. Here are four key considerations to keep in mind for an effective automation testing strategy.

Previously developed scope

Automation today is an integral part of any testing strategy. Automated scripting is done in each sprint, and an automation regression test suite is gradually built up. The automated scripts should validate not just the basic functionality of the web application but the core business logic as well. Automated scripts for previously built scope are then integrated with the build and deployment processes. Continuous integration ensures that the previously developed scope is automatically regressed before we start testing new functions. To give an example from my own experience: for a client in the US, a leading provider of nonprofit software and services, we have been building an automation regression test suite for many years and have automated more than 3,000 test cases using a keyword-driven automation framework.

Quick time to market

While a DevOps practice like the one we have at Nagarro ensures that most of the manual tasks of development operations are automated for the release, by running tests against the codebase frequently, after each code check-in, the testing team can provide rapid notification of any defects, particularly those in unchanged areas of the code. This means that the software is rebuilt and tested every time the code changes.
For one of our clients in the gaming domain in Europe, we used the entire DevOps tool chain, including JIRA, Stash, SonarQube, Jenkins/Hudson, and Selenium WebDriver, and automated the end-to-end release cycles. We then coupled the automated release cycles with the automated regression suite, resulting in shorter testing cycles (moving from quarterly to fortnightly releases) and hence reduced time to market.

Cross browser testing

A cross browser testing strategy mandates the use of automation focused on testing the business logic. We can use customized automation frameworks developed in-house to facilitate parallel execution of test cases: for instance, if we run 3 tests at a time, the whole suite finishes almost 3 times faster. Test cases are run against different browsers and platforms at the same time. For another one of our clients, we increased automation coverage from 0% to almost 80% and, through automated parallel execution, reduced the regression execution time from 4 weeks to 2 days.

Sensitive nature of user data

The sensitive nature of user data and the unavailability of production data make an automation testing strategy more challenging. Data such as a customer's personal information (phone, address, date of birth, etc.) and tax account information (account number, name, etc.) is highly confidential and can easily be misused. Automated data masking of production data can be used to generate test data for a web-based automation framework. In addition, automated data generation scripts can be written for exceptional cases for which no production data exists. Finally, an effective automation strategy is one we can learn from and improve over time through predictive analytics and applied business knowledge.
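The parallel-execution speed-up described above can be sketched with Python's standard thread pool. The tests here are simulated with short sleeps purely to illustrate the arithmetic (6 cases across 3 workers take roughly 2 rounds instead of 6); a real suite would be driving browsers:

```python
# Illustrative sketch: running test cases 3 at a time with a worker pool.
import time
from concurrent.futures import ThreadPoolExecutor

def run_case(name):
    time.sleep(0.1)  # stand-in for driving a browser through one test
    return (name, "passed")

cases = [f"case_{i}" for i in range(6)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_case, cases))
parallel_time = time.perf_counter() - start
# 6 cases / 3 workers => about 2 "rounds" of work instead of 6,
# so wall-clock time is close to a third of serial execution.
```

The same pattern scales to cloud device grids, where each worker targets a different browser/platform combination.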
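The data-masking idea can be sketched as follows. The field names and masking rules below are hypothetical, not a real production schema; the point is that masked data keeps the shape real tests need while hiding the confidential values:

```python
# Hypothetical data-masking helpers for generating safe test data.
import hashlib

def mask_phone(phone):
    """Keep the last two digits for realism, mask the rest."""
    digits = [c for c in phone if c.isdigit()]
    return "X" * (len(digits) - 2) + "".join(digits[-2:])

def mask_account(account_no):
    """Replace the account number with a stable one-way token."""
    return hashlib.sha256(account_no.encode()).hexdigest()[:10]

record = {"name": "Jane Doe", "phone": "5551234567", "account_no": "AC-99812"}
masked = {
    "name": "Customer-" + hashlib.sha256(record["name"].encode()).hexdigest()[:6],
    "phone": mask_phone(record["phone"]),
    "account_no": mask_account(record["account_no"]),
}
```

Because the hashing is deterministic, the same production record always masks to the same token, so relationships between masked records survive for end-to-end tests.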

6 ways to boost your social intranet adoption

Why do employees who are hyperactive on Facebook become introverts on their company's internal social networks? Is the lack of usage purely a 'cultural' thing? Why are some company intranets and internal collaboration sites more successful than others in engaging employees? What role does the technology platform play, especially when the end user is not even aware of the choice? Does a management diktat help ensure increased adoption? If your organization finds itself troubled by these questions, here are some suggestions on what makes an intranet portal truly collaborative. They apply to intranets at various stages of digital maturity: from a simple information publishing site, to a document or content repository, to a consolidated platform for completing tasks, to a truly collaborative idea generation engine.

1. Make user adoption the focus

You may have built the best-looking intranet site using best-of-breed platforms, but that doesn't guarantee user adoption. Here are some practical suggestions you may find useful. Avoid the 'big daddy' mandate-driven approach; make intranet usage optional to start with, with gradual incentives towards migration. For example, send teaser emails on company-wide announcements asking users to visit the intranet for more details, or put notices about company events on the intranet and use other media (such as email and notice boards) to point users there. Essentially, gently nudge your employees towards the intranet portal daily. Try making the portal an essential resource tool for employees and get your staff into the habit of accessing it daily; perhaps daily attendance could be recorded through the portal. Include a fun element: run a contest for the users with the most followers, the most cross-department comments, or the most cross-geography 'likes' on an idea.
Contests, polls, quizzes: anything that persuades users to share their responses or points of view, rather than just view content passively, helps. Another idea is 'gamification' of the user experience, where users compete against others in a simulated environment; I have seen this increase user adoption dramatically. Intersperse text with infographics, videos, pictures, charts, and slide shows; for example, uploading a video or another digital asset might attract higher reward points than a simple text update. Keep cultural, language, and time zone sensibilities in mind to ensure adequate participation in a multi-country environment. Try to provide personalized content to the user rather than making the portal a dumping zone: 'Recommended news', 'Your tasks', 'Suggested groups', and 'HR policies in focus this week' may help. Tag people frequently.

2. Make it mobile first, and good looking

Designing apps and portals specifically for mobile users is no longer a 'nice-to-have'. To ensure adoption, especially by remote workers and an on-the-field sales force, seamless access to the intranet without major functionality handicaps is key. A recent Google survey found that mobile users are five times more likely to abandon the task they are trying to complete if the site isn't optimized for mobile use. A slick, intuitive, and responsive UX design is imperative. Additionally, for apps and portals in which user engagement is paramount, I recommend a well-made native mobile app.

3. Establish a good moderation framework

Establish an effective intranet steering group to manage the structure and rules of engagement, direct good usage and developments, discourage bad habits (for example, posting in the wrong groups, comments with a sexist undertone, and excessive people tagging), resolve conflicts, audit content, and handle change management needs. Recruit a team for the ongoing creation of content.
More importantly, listen to your employees' voice on the intranet.

4. Make your intranet efficient

Ensure minimal overheads for the end user. Identify the top tasks which employees perform repeatedly across multiple systems, and which of those are easy to accomplish on the intranet. Push your IT vendor to integrate the intranet with other LOB, HR, and admin systems so that employees don't have to duplicate their efforts. Get a process re-design expert to streamline your employee-facing processes, and explore the possibility of putting those on the intranet portal.

5. Quick, iterative development

The portal may be designed with the utopian state in mind (ask your business to make a future-proof wish list), but keep the implementation process agile. You could plot requirements on a chart of business criticality vs. technical ease of implementation and use it to decide the sprint-wise product backlog. It may be as simple as implementing out-of-the-box features to start with, just to get things moving. Agree with your vendor on the key success parameters beforehand and, if possible, include the KPIs in the vendor payment milestones. Those parameters could include indicators such as user adoption, user abandonment rate, number of bugs, and % downtime.

6. Make sure you select the right intranet software

There are scores of COTS business intranet products to choose from. You could go with the established names recognized by analyst firms such as Gartner or Forrester; you could choose among open source and licensed products; and you could choose among technologies: Java-based, .NET-based, or PHP-based. My suggestion: choose a platform-agnostic vendor first, one who can help you choose the right platform based on your needs, rather than a vendor bound to a particular product who cannot do an impartial evaluation. In the end, remember that outcomes matter more than the platform.
So whatever product you choose, the key is the quality of implementation and user adoption.
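The criticality-vs-ease prioritization mentioned under point 5 can be sketched as a simple two-key sort. The backlog items and their 1-to-5 scores below are invented for illustration:

```python
# Hypothetical backlog ranked by business criticality, then ease of build.
items = [
    {"name": "Company news feed",     "criticality": 5, "ease": 4},
    {"name": "HR leave requests",     "criticality": 4, "ease": 2},
    {"name": "Gamified leaderboard",  "criticality": 2, "ease": 3},
    {"name": "Out-of-the-box search", "criticality": 3, "ease": 5},
]

# High-criticality, easy-to-build items float to the top of the backlog.
backlog = sorted(items, key=lambda i: (i["criticality"], i["ease"]), reverse=True)
top_item = backlog[0]["name"]
```

Sorting on criticality first encodes the principle that business value drives sprint order, with technical ease only breaking ties.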