Analytics

Unlock the Value of Your Supply Chain Through Embedded Analytics


Over the past few months I have posted a couple of blogs about the use of analytics in the supply chain. The first, Understanding the Basics of Supply Chain Analytics, discussed the ‘why’: the reasons for applying analytics to supply chain operations. The second, Achieve Deeper Supply Chain Intelligence with Trading Grid Analytics, discussed the ‘how’: the methods of obtaining meaningful insights from the B2B transactions flowing between trading partners. Both blogs were written in support of our recently announced OpenText™ Trading Grid Analytics, one of the many Business Network offerings in Release 16, the most comprehensive set of products OpenText has released to help companies build out their digital platforms and enable a better way to work.

Those who have followed my blogs over the years will know that I have worked with many analyst firms to produce white papers and studies, so it was fitting to engage an outside analyst for a thought leadership white paper on analytics in the supply chain. I worked with IDC on a paper entitled Unlock the Value of Your Supply Chain Through Embedded Analytics. IDC has been producing interesting content over the years in support of its ‘Third Platform’ model, which embraces IoT, cloud, mobile and big data, and explores how companies can leverage these technologies for increased competitive advantage.

The aim of our new white paper was to discuss the business benefits of embedding analytics into the transaction flows across a business network. Compared to other business intelligence and end-user analytics vendors, OpenText is in a unique position: we own our Business Network, and we are able to introspect the 16 billion EDI transactions flowing across it. IDC framed the discussion with a relatively new management theory called VUCA, which stands for Volatility, Uncertainty, Complexity and Ambiguity. VUCA was originally defined in the military field; for our paper, IDC realigned it to a more connected, information-centric and synchronized business network: Velocity, Unity, Coordination and Analysis.

I am not going to highlight too much content from the paper, but here is one interesting quote: “It is the view of IDC that the best supply chains will be those that have the ability to quickly analyze large amounts of disparate data and disseminate business insights to decision makers in real time or close to real time. Businesses that consistently fail to do this will find themselves at an increasing competitive disadvantage and locked into a reactionary cycle of firefighting. Analytics really will be the backbone of the future of the supply chain.”

I am not going to spoil the party by revealing any more from the paper! If you would like further insights into the white paper, OpenText will be hosting a joint webinar with IDC on 27th July 2016 at 11am EDT / 5pm CET.
This 40-minute webinar will allow you to:

- Understand how embedded analytics can provide deeper supply chain intelligence
- Learn how the VUCA management theory can be applied to a supply chain focused analytics environment, and the business benefits that can be expected
- Find out why it is important to have trading partners connected to a single business network environment to maximize the benefits of applying analytics to supply chain operations
- Learn how OpenText can provide a cloud-based analytics environment to support your supply chain operations

You can register for the webinar here.

Read More

Top 10 Trends To Watch in Financial Analytics (Infographic)


General Motors was a company facing challenges when it hired Daniel Akerson as CEO in 2010. His background was tied to financial analytics, so it is no surprise that GM then acquired AmeriCredit in an all-cash transaction valued at approximately $3.5 billion. Akerson commented that he started “creating world-class systems in our IT and financial systems, to quickly capitalize on opportunities and protect product programs from exogenous risk”. Fortune magazine recently reported that General Motors is working on the future of driving, which it believes will be “connected, seamless and autonomous.”

Chief Financial Officers are not renowned for underestimating the power of innovation and statistics, better known today as “analytics.” Accenture has defined the CFO as the “Architect of Business Value,” yet if we go back to 2014, we may recall that many companies had deep silos of inconsistently defined, structured data that made it difficult to extract information from different parts of the business. We all know that, so what’s new for financial analytics?

1) Big data is getting bigger with the Internet of Things → Tweet this

Having accurate data is still critical. “In God we trust, all others must bring data,” said engineer and statistician W. Edwards Deming. We all agree that not all data is useful, but the wider the range of data sources, the more accurate the 360º view; so the challenge in a world with 6.4 billion connected devices is even bigger, and finance organizations are feeling it.

2) Digital is killing the finance organization → Tweet this

Big data coming from new structured and unstructured data sources is increasing risk, but also offering new opportunities. The growing ability to store and analyze data from disparate sources is driving many leaders to alter their decision-making process. The number of mobile banking users in the US is now 111 million, up from just 35 million in 2010, and today banks adopt e-commerce tactics, tracking customers through mobile to reduce churn and increase sales opportunities.

3) Data gravity will pull financial analytics to the cloud → Tweet this

Cloud data warehousing is becoming widely adopted, so the question for Information Technology now is “When?” When your data is already in the cloud, you had better have analytics right there. Having your data within the OpenText™ Cloud has the benefit of being with the leader in Enterprise Information Management and being able to load data via the front end or back end into a columnar database.

4) Compliance audit is still a challenge → Tweet this

Self-service data preparation is becoming key to better data audit. Analysts are saying that the next big market disruption is self-service data preparation, and that by 2019 it will represent 9.7% of the global revenue opportunity of the business intelligence market. OpenText™ Big Data Analytics, for example, provides financial analysts with easy-to-use tools for self-service data preparation, such as:

- Data preparation – normalization, linear scaling, logistic scaling or softmax scaling (two of these are sketched in code below)
- Data enrichment – aggregates, decodes, expressions, numeric ranges, quantile ranges, parametric or ranking
- Data audit – count, kurtosis, maximum, mean, median, minimum, mode, skewness, sum, or standard deviation
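To make the first of those bullets concrete, here is a minimal Python sketch of min-max normalization and softmax scaling, using their common textbook definitions. It is an illustration only, not OpenText’s implementation, and the sample values are made up.

```python
# Illustrative textbook definitions of two scaling methods; not product code.
import math
import statistics

values = [12.0, 15.5, 9.0, 22.0, 18.5]

# Min-max normalization: rescale values linearly into [0, 1].
lo, hi = min(values), max(values)
normalized = [(v - lo) / (hi - lo) for v in values]

# Softmax scaling: pass standardized values through a logistic curve,
# which keeps outliers from dominating the [0, 1] range.
mu, sigma = statistics.mean(values), statistics.stdev(values)
softmax_scaled = [1 / (1 + math.exp(-(v - mu) / sigma)) for v in values]

print([round(v, 3) for v in normalized])
print([round(v, 3) for v in softmax_scaled])
```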
5) Organizations should focus analytics on decisions, not reporting → Tweet this

Analysts introduced the concept of the algorithmic business to describe the next stage of digital business, and forecast that by 2018 over half of large organizations globally will compete using advanced analytics and proprietary algorithms, disrupting entire industries.

6) Analytics is not only about reporting any more → Tweet this

Analytics is maturing, and priorities are naturally focused on the business. Algorithms will not only provide insights and support decision-making, but will be able to make decisions for you. 49% of companies are using predictive analytics, and a further 37% plan to use it within 3 years. Predictive analytics with big data has been the cornerstone of successful financial services organizations for some time now. What’s new is how easily any business user can access predictive insights with tools like OpenText™ Big Data Analytics. According to Francisco Margarite, CIO at Inversis Bank, “extracting and analyzing information used to take a considerable amount of time of our technical experts, and as a consequence, we were not providing the appropriate service. Now, we have provided more user autonomy and reduced the work overload in IT.”

Here are some examples of how analytics can help:

- Venn Diagram: Identify which customers that purchased “Insurance A” and “Insurance C” did not purchase “Insurance B”, so you can get a profile and a shared list.
- Profile: Identify variables that describe customers by what they are and what they are not, so your company can personalize campaigns.
- Decision Tree: Identify which real estate is the most appropriate for each prospect according to the profile and the features of the product – price, size, etc. Human Resources will probably find it useful for analyzing social data to recruit people who already have an affinity with the company or its products. (A minimal decision-tree sketch follows this list.)
- Forecast: Anticipate changes, trends and seasonal patterns. You can accurately predict sales opportunities or anticipate a volume of orders, so you can also anticipate any associated risks. You can use sensor data to predict the failure of an ATM cash withdrawal transaction, for example.

Subscribers are also able to run easy-to-use, pre-built Clustering, Association Rules, Logistic Regressions, Correlations, or Naive Bayes.
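As a hedged illustration of the Decision Tree example, here is a tiny scikit-learn sketch with made-up prospect data. The feature names, values, and property classes are all hypothetical; the product exposes this kind of analysis through its interface rather than through code.

```python
# Illustrative only: a tiny decision-tree classifier in the spirit of the
# "Decision Tree" example above, with invented fields and training data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical prospect features: [annual_income_k, household_size]
X = [[40, 1], [55, 2], [80, 4], [120, 5], [65, 3], [150, 4]]
# Property type each similar past customer chose: 0=studio, 1=apartment, 2=house
y = [0, 1, 2, 2, 1, 2]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[70, 3]]))  # suggest a property type for a new prospect
```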
7) Financial data is the leading indicator when using analytics to predict serious events → Tweet this

Recent cases like CaixaBank or MasterCard prove that there is growing interest in using analytics to get ahead of social and political developments that affect corporate, and sometimes national, interests. Financial data is a leading indicator. Now, we can also look at big data from social media to determine sentiment.

8) What will make emotional analytics really helpful is to have a stronger analytics model behind it → Tweet this

Risk or fraud can now be identified by sentiment analysis on social media, so you can determine future actions. But if you look at sentiment or emotion alone, it essentially flags a lot of false positives.

9) The need to manage, monitor, and troubleshoot applications in real time has never been more critical → Tweet this

Digital transformation grows as the reliance on new software grows, so the need for full-stack visibility and agility is the true business demand underpinning the growth of analytics.

10) To know your customer and to deliver relevant data is a key business differentiator → Tweet this

TransUnion, for instance, reported a profit for 2015 after booking losses since 2012, having moved from relational databases to predictive and embedded analytics. Before the overhaul, IT spent half its time maintaining older equipment; about 60% of capital spending went to managing legacy systems, compared with 40% to 45% now, the company said.

First things first: no matter whether you are forecasting risk of fraud, crime, or financial outcomes, once you have the insight from predictive analytics, you will need a way to communicate it to the right person. 77% of companies cite predictive capabilities as one of the top reasons to deploy self-service business intelligence, but increasing loyalty by improving customer satisfaction is also a very relevant reason. Firms can now deliver a tailored, personalized experience where customers enjoy self-service access to their own account data and history, and are proactively provided recommendations and information about additional products and services to:

- Drive increased revenue via cross-selling, by automatically suggesting new products and services to targeted customers
- Engage customers by providing self-service access to personal views of account data on any device
- Increase loyalty by enabling customers to interactively customize how they analyze their financial data
- Realize a more rapid time to market by leveraging APIs to embed new features in existing customer applications

OpenText™ Information Hub provides great embedded analytics according to Dresner, and also offers insights from predictive analysis because of its unique integration with OpenText Big Data Analytics.

Research: “Operationalizing and Embedding Analytics for Action” from TDWI. The report notes that operationalizing and embedding analytics requires more than static dashboards that are updated once a day or less. It requires integrating actionable insights into your applications, devices and databases. Download the report here.

Read More

How to Identify Return on Investment of Social Media Marketing (Infographic)


B2B marketing leaders will be spending more money on technology than the CIO in 2017. Sure, they may already spend a lot, but the interesting question here is: will they now finally be able to identify the revenue? It was not long ago that CMOs were perfectly happy to keep adding new technology tools. The problem came when they were forced to maintain those tools and to measure their performance in order to prove they were still needed. While each platform may include its own reporting tools, in an omnichannel world, having so many partial views of the truth makes little sense. Many business users and decision makers can’t see the forest for the trees in the current analytics environment. How will they manage in an Analytics of Things world with 6.4 billion connected devices? Are CMOs prepared?

According to a recent study on The State of Marketing Analytics by VentureBeat, “Analytics are key to showing value, yet the market is huge and fragmented.” Customers are no longer using a single channel to buy, yet only 60% of marketers create integrated strategies. Probably the most painful source to measure is social media. It is painful because there is no way that investors will accept engagement metrics such as impressions or likes as revenue. It is also painful because the CMO is being pushed by market analysts to invest more and more budget in social media. Social media spending is expected to climb to a 20.9% share of marketing budgets within 5 years, even though analytics is not yet fully integrated or embedded, according to a recent CMO Survey. ROI of social media, in the form of adding new revenue sources, enhancing existing revenue sources and increasing revenues, is more pronounced among companies with a more mature data-driven culture. The graph below shows where competitive advantage has been achieved as a result of data-driven marketing, by stage of development, according to Forbes. What’s worse for the laggards is that their immature analytics culture is resulting in the lowest profitability – often they don’t even know which analytics platforms they are paying for, according to the same research.

“There is no substitute for hard work,” said Thomas Edison. To identify the ROI of social media marketing, we will do the hard work by going through the requirements for your social media data below, including:

- Why you need integration and democratization of data in the cloud, including social media data
- Why you need self-service advanced and predictive analytics for your campaigns
- Why agility is so important when it comes to marketing analytics
- Why integrated reporting and insights should be easy to access by any marketer or business user

Integration and democratization of data in the cloud, including social media data

This is not just useful for getting a single view of the truth. In an environment where each platform’s reporting is separate from the others, it is not possible to calculate the ROI of social media marketing. You may find a way to calculate the revenue of a Pay Per Click campaign in LinkedIn, for example, but that is a very narrow view of what social media marketing is about. The first step to identifying the real revenue of social media should be to integrate all disparate data sources in the cloud. They should be integrated because that way you will be able to cross-reference information and discover the real 360º customer view and the actual ROI.
Also, it should be integrated in the cloud specifically so other business users can access and self-serve the final insights. Some people may think they have a 360º customer view when they integrate Google Analytics and Salesforce to calculate the ROI of a paid campaign in Google AdWords. But they are still far from that view. According to Think with Google, for instance, customers in most industries in the US click on a paid ad long after they first engaged on social. That means that a percentage of the ROI of the Pay Per Click campaign should be attributed to social to be accurate. What if, instead of making assumptions, you start to track which channel comes first, which comes next, and which comes last? You can do this with tools like Piwik or Eloqua Insights, because they track the different devices from which the customer visits your website as well as the specific URLs they land on, and order the events by date. While this is fine if you have fewer than 200 sessions per day on your website, if you have more than that you will quickly understand why big data is more than just hype! Trust me, if you had the time to explore, analyze and export using Piwik or Eloqua Insights, you would really learn what patience means. With big data, even if you just want to use a selected small part of it, you need columnar database technology like that used by OpenText™ Big Data Analytics.

Having your data sources integrated at the start and managed by IT is fine, but when it comes to data audits, data cleansing and data enrichment, you had better expect this to be self-service. B2B companies need to know as much as they can about the companies they market to: 75% of B2B marketers say that accurate data is critical for achieving their goals, but lack of data on industry, revenue, and employees is a problem in up to 87% of cases. Bad data affects not just marketing but also sales; according to Forrester, executive buyers find that 77% of the salespeople they meet don’t understand their issues or where they can help. As a result, a lot of CMOs are taking the initiative to start profiling and creating more targeted leads so that sales don’t have this problem.

Self-service advanced and predictive analytics for marketing campaigns

According to a recent study by MDG Advertising, B2B organizations that utilize predictive analytics are 2x more likely to exceed their annual marketing ROI goal. The research offers interesting reasons why 89% of B2B organizations have predictive analytics on their roadmap. According to VentureBeat, there are 3 main reasons why marketers aren’t that advanced in their analytical approaches – including skill gaps around data science. Easy-to-use tools can make it easier to run reports, but without a real understanding of data-driven approaches, the final report may not be accurate enough. Predictive lead scoring, for instance, can yield significant ROI, and 90% of large organizations will have a Chief Data Officer (CDO) by 2019. Meanwhile, CMOs are not inactive: 55% of B2B organizations are already hiring for marketing analytics roles.

Success in social media is not as predictable as turning 30: you know you will be 29 for 12 months and then the number will automatically change. That kind of prediction is easy, but it does not scale to social media. In social media you need to go deeper than the surface to measure, for example, whether it is worth having 30 social media accounts or maybe 10.
You need to measure whether it is worth having 30 blog articles or maybe 300. You will want to calculate whether “Channel A” is worthwhile because it generated 300 conversions from unqualified leads, or not. You may want to go further and identify which of the qualified leads bought “Product A” and “Product C” but haven’t yet bought “Product B”. Do you want better segmentations and profiles of those, so you can create a custom cross-selling campaign based on information that you can get from LinkedIn, for instance? If so, you will need to go further than data visualization. Companies need advanced analytics to identify ROI. This is what will give you insight into the ROI of your social media and, more than that, could ensure success in the current digital era. You will find the following ad-hoc and pre-built tools in OpenText Big Data Analytics:

- Venn Diagram: Are you tired of reaching the wrong people? Smarter companies are reporting benefits from data mining to target advanced segmentations and the most appropriate people with marketing material that resonates with them. Note that segmentations are based on data mining, but can be created by dragging and dropping the database objects in the left column.
- Profile: Are you still re-marketing to visitors who landed on your website by mistake? Our drag-and-drop, easy-to-use tool is what marketing needs in the current customer-centric culture. B2B marketing goals for predictive analytics span the customer funnel, from customer retention, customer lifetime value and customer effectiveness right up to customer acquisition – so a customer profile is a must-have to begin.
- Association Rules: What if you could identify which users are likely to abandon you, using sentiment analysis of their activity on social media and help desks – so you can reduce churn with a loyalty campaign? Would that help the ROI of your social media?

You can find more predefined analysis screenshots and use case videos to help. (A minimal sketch of the “bought A and C but not B” query follows below.)
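Here is a minimal pandas sketch of that cross-sell segmentation, assuming a hypothetical purchases table. It is meant only to show the logic behind the Venn-style query, not the product’s implementation.

```python
# Hedged sketch of the "bought A and C but not B" segmentation,
# with a made-up purchases table of customer_id / product pairs.
import pandas as pd

purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 3],
    "product":     ["A", "C", "A", "B", "C", "A", "C"],
})

# Collect each customer's set of purchased products.
by_customer = purchases.groupby("customer_id")["product"].apply(set)

# Keep customers who own both A and C but are missing B.
mask = by_customer.apply(lambda p: {"A", "C"} <= p and "B" not in p)
print(by_customer[mask].index.tolist())  # targets for a Product B campaign -> [1, 3]
```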
Companies plan to increase spend on marketing analytics, but many will select the wrong capabilities or be unable to use them properly. Harvard Business Review warns on this point: “marketing analytics can have a substantial impact on a company’s growth, but companies must figure out how to make the best use of it”.

Why agility is so important when it comes to marketing analytics

You already know that advanced and predictive analytics is not new, if you think about how financial services has been using it. What’s really new is how easily analysis can be created as self-service. In the real world, only 20% of organizations are able to deploy a model into operational use in less than 2 weeks, according to TDWI, so don’t forget to ask for self-service and real-time advanced analytics. You will be happy that your analytics platform includes a columnar database when you get to this point.

Why integrated reporting and insights should be easy to access by any marketer or business user

3 out of 4 marketers can’t measure and report on the contribution of their programs to the business. Isn’t that scary? Knowing the customer and delivering relevant data is a key business differentiator. Marketing analytics tools need to be more than a nicely displayed report; they need to allow decision makers to interact with the information, according to McKinsey & Company. A lot has been written about the opportunities of using big data in supply chain and retail companies, and specifically about the social media capabilities to reach their audiences. The retail analytics market is estimated to grow from $2.2 billion to $5.1 billion in 5 years, but difficulty in sharing customer analytics is ranked as a top challenge by the industry. Social media is a smart way to connect a customer with a specific local store, right? Instagram includes stats on impressions, reach, clicks and follower activity for businesses. There are a few tools with powerful capabilities to personalize, share and embed HTML5 data visualizations, but OpenText™ Information Hub is the only one that is tied to advanced and predictive analytics. 9 out of 10 sales and marketing professionals report that the greatest departmental interest is in being able to access analytics within front-office applications, and OpenText Information Hub is ranked as the top vendor in the latest Embedded Business Intelligence Market Study by analyst Howard Dresner – not without reason. Don’t forget to ask for an analytics platform that can scale to unlimited users.

Turn your social data into strategy, then gold

Predictive analytics applies not only to what will happen next quarter, but also to what the user may want to find right now. Google doesn’t wait for you to make the association rules that have probably helped you a few times – it makes its big data work for millions of users in real time. Now think about your company and one of your prospects sharing your content on social media or email. Do you create a campaign to track and nurture these actions? How do you react if a user won’t complete a form more than once? What do you do when one of your prospects is searching on your site having landed from a social media post about “Product A”? Are you able to identify that “Product C” will be the more likely purchase? There are musicians unhappy about piracy, but there are others tracking, mining and getting revenue from social data. Getting these insights on revenue before running new omnichannel campaigns will provide one-voice communication, essential for a successful omnichannel strategy. What is also important: this will help with better data-driven decisions and greater return on investment of your social media budget. 86% of companies that deployed predictive analytics for two or more years saw increased marketing return on investment, according to Forbes.

Download the research “Operationalizing and Embedding Analytics for Action” from TDWI. The report notes that operationalizing and embedding analytics requires more than static dashboards that are updated once a day or less. It requires integrating actionable insights into your applications, devices and databases. Download the report here.

Read More

CRM for BI? There’s No Substitute for Full-Powered Analytics


As CRM systems get more sophisticated, many people think these applications will answer all of their business questions – for example, “Which campaigns are boosting revenues the most?” or “Where are my opportunities getting stuck?” Unfortunately, those people are mistaken.

You have a CRM

Some vendors of CRM (customer relationship management) platforms like to talk about their analytic capabilities. Usually, those CRM systems have reporting capabilities and let users build dashboards or charts to follow the evolution of their business: tracking pipeline, campaign effectiveness, lead conversion, quarterly revenue and so on. But from the perspective of analytics, this is only the smallest fraction of the value of the information your CRM is capturing from your sales teams and your customers.

A CRM system is perfect for its intended purpose: managing the relationship between your sales force and your customers (or potential customers) and all the information related to them. And yes, it gathers lots of data in its repository, ready to be mined. What you may not realize is that the tool that collects transactional data is not necessarily the best tool to take advantage of it. It is critical for a CRM to be agile and fast in its interactions with your sales force. If it’s not, it interferes with sales people selling, building relationships with customers, and contacting prospects. So an ideal CRM system should be architected to collect, structure, and deploy transactional data in a way that the platform can manage easily. Here’s the bad news: this kind of agile, transactional data structure isn’t great for analytics.

Complex questions and CRMs don’t match

Some CRM vendors try to add business intelligence capabilities to their applications, but they face a basic problem: data that is optimized for quick transactional access is not prepared to be analyzed, and in this scenario there are lots of questions that can’t be answered easily:

- Which accounts have gone unattended in the last 30 days? Who owns specific accounts?
- Where are my opportunities getting stuck?
- At a given point in the sales cycle, which accounts are generating the most revenue?
- How profitable is this account?
- Which campaigns are influencing my opportunities?
- Which products are sold together most frequently?

These are only a few of the hundreds of questions that can’t be answered with the data and analytic techniques a CRM offers alone. (A small sketch of the first question appears at the end of this post.) Blending CRM data with other external data is even harder, because a CRM is a closed system and simply doesn’t have that capability.

CRM and full-powered analytics software

CRM is a perfect tool for some things. However, it isn’t up to the demands of answering the ever more complex questions companies need answered. Getting a 360-degree view of the business shouldn’t be a blocker to growing and increasing revenue. OpenText™ Big Data Analytics helps companies blend and prepare their data, and provides a fast, agile and flexible toolbox that lets business analysts look for answers to the questions decision-makers ask. CRM data is one important part of this equation; being more competitive using full-powered analytics software is another.
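As promised above, here is a hedged pandas sketch of the first question, assuming a hypothetical activity log exported from a CRM. It only illustrates why the question is easy once the data sits in an analytical store; the table and field names are invented.

```python
# "Which accounts have gone unattended in the last 30 days?"
# Made-up activity data standing in for a CRM export.
import pandas as pd

activities = pd.DataFrame({
    "account":    ["Acme", "Acme", "Globex", "Initech"],
    "touched_on": pd.to_datetime(["2016-06-01", "2016-07-20",
                                  "2016-05-02", "2016-07-25"]),
})

cutoff = pd.Timestamp("2016-07-26") - pd.Timedelta(days=30)
last_touch = activities.groupby("account")["touched_on"].max()
print(last_touch[last_touch < cutoff].index.tolist())  # -> ['Globex']
```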

Read More

How to Identify the Return on Investment of Big Data (Infographic)


If you are a CIO, you know what goes on at a board of directors meeting. This is not the place to be confused when your CEO asks a simple, commonly asked question that requires a simple, accurate answer: “What is the ROI of our Big Data costs?” Let’s be honest, Big Data is a big, painful issue. It is often recommended to put just a little information in your dashboard if you want to be heard. But at the same time, you want this information to be accurate, right? That’s when you are happy that your data is big.

Why is data size so important? Data scientists, for instance, are always asking for big data because they know how easily predictive analysis can be inaccurate if there isn’t enough data on which to base predictions. We all know this when it comes to the weather forecast, and it is the same for risk anticipation or the identification of sales opportunities. What’s really new is how easily any software can access big data. If 90% of the world’s data today has been created in the last 2 years alone, what can we expect in the near future? For starters, the Internet of Things is anticipated to hugely increase the volume of data businesses will have to cope with.

ROI is all about results. Big data is here to stay and bigger data is coming, so the best we can do is make it worth the trouble. “Cloud is the new electricity,” according to the latest IT Industry Outlook published by CompTIA. But I don’t have good news for you if you feel comfortable just planning to move your data to the cloud; that is just the beginning. Experts often say that big data doesn’t have much value when simply stored; your spending on big data projects should be driven by business goals. So it’s no surprise that there is increased interest in gaining insights from big data and making data-based decisions. In fact, advanced analytics and self-service reporting are what you should be planning for your big data. I’ll briefly tell you why:

- You need to support democratization of data integration and data preparation in the cloud
- You should enable software for self-service advanced and predictive analytics
- Big data insights and reporting should be put where they’re needed

Why support democratization of data integration and data preparation in the cloud

Big data analytics, recruiting talent and user experience top the CIO agenda for 2016, according to The Wall Street Journal; but these gaps will hardly be closed in time, because of the shortage of data-savvy people. According to analysts, there is an anticipated shortage of more than 100,000 analytics professionals through 2020. So, while CIOs find solutions to their own talent gaps, new software and cloud services are appearing in the market to enable business users to get business insights and advance the ROI of big data. Hopefully, someday, a data scientist can provide those insights, but platforms like OpenText™ Big Data Analytics include easy-to-use, drag-and-drop features to load and integrate different data sources from the front end or the back end, in the cloud. Now, I say hopefully because the requirements for data scientists are no longer the same; knowledge of coding is often not required. According to Robert J. Lake, what he requires from data scientists at Cisco is to know how to make data-driven decisions – that’s why he lets data scientists play with any self-service analytics tool that may help them reach that goal.
Data scientists spend around 80% of their time preparing data rather than actually getting insights from it – so interest in self-service data preparation is growing. Leaving the data cleansing to data scientists may sound like a good idea to some of their colleagues, but it is not a good idea in terms of agility and accuracy. That’s the reason cloud solutions like Salesforce are appreciated: they leave sales people time to collaborate – adding, editing or removing information that gives a more precise view of their prospects, one that only they are able to provide with such precision. What if you could expect the same from a Supply Chain Management or Electronic Health Record system, where data audits depend on multiple worldwide data sources with distinct processes, and with no dependency on data experts at all? In fact, 95% of organizations want end users to be able to manage and prepare their own data, according to noted market analyst Howard Dresner. Analysts predict that the next big market disruption is self-service data preparation, so expect to hear more about it in the near future.

Why you should enable self-service advanced and predictive analytics

Very small businesses may find desktop tools like Excel good enough for their data analysis, but after digital disruption these tools have become inadequate even for small firms. The need for powerful analytic tools is even greater for larger companies in data-intensive industries such as telecommunications, healthcare, or government. The columnar database has been proposed as the solution, as it is much speedier than relational databases when querying hundreds of millions or billions of rows. The speed of a cloud service depends on the volume of data as well as the hardware itself. Measuring the speed of this emerging technology is not easy, but even a whole NoSQL movement is advising that relational databases are not the best future option.

Companies have been able to identify the ROI of big data using predictive analytics to anticipate risk or forecast opportunities for years. For example, banks, mortgage lenders, and credit card companies use credit scoring to predict customers’ profitability. They have been doing this even when complex algorithms required data scientists – hard-to-find expertise – not just to build the models but to keep them running, which limits their spread through an organization. That’s why OpenText™ Big Data Analytics in the Cloud includes ad-hoc and pre-built algorithms like:

- Profile: If you are able to visualize a profile of a specific segment of your citizens, customers or patients and then personalize a campaign based on the differentiating values of this segment, why would the ROI of the campaign not be attributed to the big data that previously stored it?
- Forecasting: If the cloud application is able to identify cross-selling opportunities and a series of campaigns are launched, the ROI of those campaigns could be attributed to the big data that you previously secured. (A tiny forecasting sketch follows below.)
- Decision Tree: You should be able to measure the ROI of a new process based on customer risk identification during the next fiscal year and attribute it to big data that you previously stored in the cloud.
- Association Rules: You can report the ROI of a new recruitment requirement based on an analysis of job abandonment information and attribute it to big data that you had previously enabled as a self-service solution.

The greater the number of stars shown on the product’s Forecast screen, the stronger the evidence for non-randomness.
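Separately from the product’s Forecast feature, here is a toy Python sketch of the basic forecasting idea: fit a trend to historical volumes and project it forward. The figures are invented, and a real forecast would also model seasonality.

```python
# Hedged illustration of trend-based forecasting with made-up order volumes.
import numpy as np

orders = np.array([120, 132, 128, 141, 150, 149, 158, 163, 171, 178, 180, 192])
months = np.arange(len(orders))

slope, intercept = np.polyfit(months, orders, 1)   # least-squares linear trend
forecast = slope * np.arange(12, 15) + intercept   # project the next three months
print(np.round(forecast).astype(int))              # projected volumes
```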
This is when you are grateful for having so much information, and having it so clean! Customer analytics for sales and marketing provides some of the classic use cases. Looking at the patterns in terabytes of information on past transactions can help organizations identify the reasons behind customer churn, find the ideal next offer to make to a prospect, detect fraud, or target existing customers for cross-selling and up-selling.

Put Big Data insights and reporting where they’re needed

Embedded visualizations and self-service reporting are key to bringing the benefits of data-driven decisions to more departments, because they don’t require expert intervention. Instead, non-technical users can spontaneously “crunch the numbers” on business issues as they come up. Today, 74% of marketers can’t measure and report on the contribution of their programs to the business, according to VisionEdge Marketing. Imagine that you, as CIO, have adopted a very strong advanced analytics platform, but the insights are not reaching the right people – in the case of a hospital, the doctor or the patient. Let’s say the profile of the patient and drug consumption is available on someone’s computer, but that insight is not reachable by any user who can make a difference when a new action is required. In that case the hospital’s results will never be affected by big data, the ROI potential will not be achieved because the people who need the insights are not getting them, and the hospital will not change, with or without big data. This is called invisible analytics.

Consider route optimization in a supply chain – the classic “traveling salesman problem.” When a sizable chunk of your workforce spends its day driving from location to location (sales force, delivery trucks, maintenance workers), you want to minimize the time, miles, gas, and vehicle wear and tear, while making sure urgent calls are given priority. Moreover, you want to be able to change routes on the fly – and let your remote employees make updates in real time, rather than forcing them to wait for a dispatcher’s call. Real-time analytics and reporting should be able to put those insights literally in their hands, via tablets, phones, or smart watches, giving them the power to anticipate or adjust their routes (a toy sketch appears below). OpenText™ Information Hub offers these capabilities as a powerful ad-hoc and built-in reporting tool that enables any user to personalize how their company wants information and data visualizations to be displayed. You should always ensure that the security and scalability of the tool are carefully evaluated, because in such cases you will be dealing not only with billions of rows, but possibly with millions of end users.
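To ground the routing idea, here is a toy nearest-neighbour sketch in Python. Real routing engines handle traffic, time windows, and call priorities; the coordinates and the greedy heuristic here are purely illustrative.

```python
# A tiny greedy route builder in the spirit of the traveling salesman
# discussion above; all stops and coordinates are made up.
import math

stops = {"depot": (0, 0), "a": (2, 1), "b": (5, 4), "c": (1, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

route = ["depot"]
remaining = set(stops) - {"depot"}
while remaining:
    current = route[-1]
    # Always hop to the closest unvisited stop (nearest-neighbour heuristic).
    nearest = min(remaining, key=lambda s: dist(stops[current], stops[s]))
    route.append(nearest)
    remaining.remove(nearest)

print(route)  # -> ['depot', 'a', 'c', 'b']
```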
As mentioned at the start of this blog, user experience is also at the top of the CIO’s agenda. True personalization that ensures the best user experience requires technology that can be fully branded and customized. The goal should be to adapt data visualizations to the same look and feel as the application, to provide a seamless user experience.

UPS gathers information at every possible moment and stores over 16 petabytes of data. It makes more than 16 million shipments to over 8.8 million customers globally, receives on average 39.5 million tracking requests from customers per day, and employs 399,000 people in 220 different countries. It spends $1 billion a year on big data, and its revenue in 2012 was $54.1 billion.

Identification of the ROI of big data depends on the democratization of the business insights coming from advanced and predictive analytics of that information. Nobody said it is simple, but it can lower operating costs and boost profits, which every business user identifies as ROI. Moreover, when line-of-business users rather than technology users are driving the analysis, and the right people are getting the right insight when they need it, improved future actions should feed the wheel of big data with the bigger data that is coming. And surely you want it to come to the right environment, right?

Download the Internet of Things and Business Intelligence report by Dresner. The Internet of Things and Business Intelligence report from Dresner Advisory Services is a 70-page study that provides a wealth of information and analysis, offering value to consumers and producers of business intelligence technology and services. The business intelligence vendor ratings include scores for location intelligence, end-user data preparation, cloud BI, and advanced and predictive analytics – all key capabilities for business intelligence in an IoT context. Download here.

Read More

Big Data: The Key is Bridging Disparate Data Sources


People say Big Data is the difference between driving blind in your business and having a full 360-degree view of your surroundings. But adopting big data is not only about collecting data. You don’t get a Big Data club card just for changing your old (but still trustworthy) data warehouse into a data lake (or even worse, a data swamp). Big Data is not only about sheer volume of data; it’s not about making a muscular demonstration of how many petabytes you have stored. To make a Big Data initiative succeed, the trick is to handle widely varied types of data, disparate sources, datasets that aren’t easily linkable, dirty data, and unstructured or semi-structured data. At least 40% of the C-level and high-ranking executives surveyed in the most recent NewVantage Partners Big Data Analytics Survey agree. Only 14.5% are worried about the volume of the data they’re trying to handle.

One OpenText prospect’s Big Data struggle is a perfect example of why the key challenge is not data size but complexity. Recently, OpenText™ Analytics got an inquiry from an airline that needed better insights in order to head off customer losses. This low-cost airline had made a discovery about its loyal customers: some of them, without explanation, would stop booking flights. These were customers who used to fly with the airline every month or even every week, but were now disappearing unexpectedly. The airline’s CIO asked why this was happening. The IT department struggled to run SQL queries against different systems and databases, exploring common scenarios for why customers leave. They examined:

- The booking application, looking for lost customers (or “churners”). Who had purchased flights in previous months but not the most recent month? Which were their last booked flights?
- The customer service ticketing system, to find whether any of the “churners” found in the booking system had a recent claim. Were any of those claims solved? Closed by the customer? Was there any hint of customer dissatisfaction? What were the most commonly used terms in their communications with the airline – for example, prices? Customer support? Seats? Delays? And what was the tone or sentiment around such terms? Were they calm or angry? Merely irked, or furious and threatening to boycott the airline?
- The database of flight delays, looking for information about the churners’ last bookings. Were there any delays? How long? Were any of these delayed flights cancelled?

Identifying segments of customers who had left the company during the last month, whether due to unresolved claims or too many flights delayed or canceled, would be the first step towards winning them back. So at that point, the airline IT department’s most important job was to answer the CIO’s question: may I have this list of customers?

The IT staff needed more than a month to get answers to these questions, because the three applications and their databases didn’t share information effectively. First they had to move long lists of customer IDs, booking codes, and flight numbers from one system to another, then repeat the process when the results weren’t useful. It was a nightmare crafted of dispersed data, complex SQL queries, transformation processes, and lots of effort – and it delivered answers too late for the decision-maker. A new month came with more lost customers. That’s when the airline realized it needed a more powerful, flexible analytics solution that could effortlessly draw from all its various data sources.
Intrigued by the possibilities of OpenText Analytics, the airline asked us to demonstrate how we could solve its problems. Using Big Data Analytics, we blended the three disparate data sources, and in just 24 hours we were able to answer the questions. (A simplified sketch of this kind of cross-source query appears below.) The true value of Big Data is getting answers out of data coming from several diverse sources and different departments. This is the pure 360-degree view of business that everyone is talking about. But without an agile and flexible way to get that view, value is lost in delay. Analytical repositories that use columnar technologies – i.e., what OpenText Analytics solutions are built on – are there to help answer questions fast when a decision-maker needs answers to business challenges.
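As a rough idea of what “blending” looks like once the three sources live in one analytical store, here is a hedged pandas sketch with made-up tables standing in for the booking, claims, and delay systems; the real engagement used OpenText’s columnar engine, not pandas.

```python
# Hedged sketch of a cross-source churn query over three invented tables.
import pandas as pd

bookings = pd.DataFrame({
    "customer": ["ann", "bo", "cy"],
    "last_booking": pd.to_datetime(["2016-03-01", "2016-06-28", "2016-02-14"]),
})
claims = pd.DataFrame({"customer": ["ann"], "open_claim": [True]})
delays = pd.DataFrame({"customer": ["cy"], "delayed_flights": [3]})

# "Churners": loyal customers with no recent booking.
churners = bookings[bookings["last_booking"] < "2016-06-01"]

# Attach claim and delay context from the other two systems.
report = (churners.merge(claims, on="customer", how="left")
                  .merge(delays, on="customer", how="left"))
print(report)  # one row per churner, ready for the CIO's list
```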

Read More

Analytics at 16 — Better Insight Into Your Business


It’s been 16 years since the dawn of Business Intelligence version 2.0. Back then, the average business relied on a healthy diet of spreadsheets with pivot tables to solve complex problems such as resource planning, sales forecasting and risk reporting. Larger organizations lucky enough to be cash-rich employed data scientists armed with enterprise-grade BI tools to give them better business insights. Neither method was perfect. Data remained in silos, accessible in days, not minutes. Methodologies such as Online Analytical Processing (OLAP), extract-transform-load (ETL), and data warehousing were helpful in computing and storing this data, but limitations on functionality and accessibility remained.

Fast forward to today. Our drive to digitize analytics and provide a scalable platform creates opportunities for businesses to use and access any data source, from the simplest flat files to the most complex databases and online data. Advanced analytics tools now come as standard with connectors for multiple disparate data sources and a remote data provider option for loading data from a web address. These improvements in business analytics capabilities give industry analysts a rosy outlook for the BI and analytics market: one is forecasting global revenue in the sector to reach $16.9 billion this year, an increase of 5.2 percent from 2015.

A Better Way to Work

While business leaders are clamouring for more modern analytics tools, what do key stakeholders – marketers and the business analysts who support them, end users, and of course IT and their development teams – really want in terms of outcomes? Simple: businesses want their analytics easy to use, fast, and agile. Leading technology analysts have commented that the shift to the modern BI and analytics platform has now reached a tipping point, and that transitioning to a modern BI platform provides the opportunity to create business value from deeper insights into diverse data sources.

Over the last few years, OpenText has established its Analytics software to serve in the context of the application (or device, or workflow) to deliver personalized information, drive user adoption, and delight customers. Our recent Release 16 of the Analytics Suite is helping to enable our vision of using analytics as “A Better Way to Work.” The Analytics Suite features common, shared services between the two main products – OpenText™ Information Hub (iHub) and OpenText™ Big Data Analytics (BDA) – such as single sign-on, a single security model, common access, and shared data. Additionally, iHub accesses BDA’s engine and analysis results to visualize them. The solution includes broadly functional APIs based on REST and JavaScript for embedding. Both iHub and BDA can be deployed either on-premises or as a managed service, serving business and technical users.

Understand and Engage

This focus drives our approach. At a high level, we enable two key use cases (as illustrated below). First, advanced analytics harnesses the power of your data to help you better understand your market or your customers (or your factory or network in an Internet of Things scenario). Second, you engage with these users and decision makers through data-driven visual information like charts, reports, and dashboards – on the app or device of their choice.
Whether you are looking to build smarter customer applications or harness the power of your big data to beat the competition to market, analytics is the bridge between your digital strategy and business insights that drive smarter decisions—for every user across the enterprise. Check out the Analytics Suite today and gain deeper insight into your business.

Read More

Achieve Deeper Supply Chain Intelligence with Trading Grid Analytics


In an earlier blog I discussed how analytics could be applied across supply chain processes to help businesses make more informed decisions about their trading partner communities. Big Data analytics has been used across supply chain operations for a few years; however, the real power of analytics can only be realized if it is applied to the transactions flowing between trading partners. Embedding analytics in transaction flows allows companies to get a more accurate ‘pulse’ of what is going on across supply chain operations. In this blog I would like to introduce a new offering that is part of our Release 16 launch, OpenText™ Trading Grid Analytics.

The OpenText™ Business Network processes over 16 billion EDI-related transactions per year, and this provides a rich seam of information to mine for improved supply chain intelligence. Last year, OpenText expanded its portfolio of Enterprise Information Management solutions with the acquisition of an industry-leading embedded analytics company. The analytics solution that OpenText acquired is being embedded within a number of cloud-based SaaS offerings that are connected to OpenText’s Business Network. Trading Grid Analytics provides the ability to mine transaction flows for both operational and business-specific metrics. I explained the difference between operational and business metrics in my previous blog, but to recap briefly:

- Operational metrics deliver the transactional data intelligence and volume trends needed to improve operational efficiencies and drive company profitability.
- Business metrics deliver the business process visibility required to make better decisions faster, spot and pursue market opportunities, mitigate risk and gain business agility.

Trading Grid Analytics will initially offer a total of nine out-of-the-box metrics (covering EDIFACT and ANSI X12 based transactions), made up of two operational and seven business metrics, all displayed in a series of highly graphical reporting dashboards.

Operational Metrics

- Volume by Document Type – Number and type of documents sent and received over a period of time (days, months, years)
- Volume by Trading Partners – Number and type of documents sent and received, ordered by top 10 and bottom 10 partners

Business Metrics

- ASN Timeliness – Number of timely ASN creation instances as a percentage of total ASNs for a time period
- Price Variance – The actual invoiced cost of a purchased item, compared to the price at the time of order
- Invoice Accuracy – Measures whether invoices accurately reflect orders placed in terms of product, quantities, and price by supplier, during a specified period of time
- Quantity Variance – The remaining quantity to be invoiced from a purchase order, equalling the difference between the quantity delivered and the quantity invoiced for goods received
- Order Acceptance – Fully acknowledged POs as a percentage of the total number of POs within a given period of time
- Top Partners by Spend – Top trading partners by economic spend over a period of time
- Top Products by Spend – Top products by economic spend over time

Supply chain leaders and procurement professionals need an accurate picture of what is going on across their trading partner communities so that they can, for example, identify leading trading partners and have information available to support the negotiation of new supply contracts. (Two of the metric calculations above are sketched in code below.)
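As a hedged illustration of how two of these metrics could be computed, here is a small Python sketch over hypothetical order and invoice line items. The field names and the exact matching rules are assumptions for illustration, not the product’s definitions.

```python
# Toy computation of price variance and invoice accuracy over invented
# (purchase order, item) line items; not Trading Grid Analytics itself.
orders   = {("PO-1", "widget"): {"qty": 100, "price": 2.50},
            ("PO-2", "gadget"): {"qty": 40,  "price": 9.00}}
invoices = {("PO-1", "widget"): {"qty": 100, "price": 2.75},
            ("PO-2", "gadget"): {"qty": 40,  "price": 9.00}}

accurate = 0
for key, inv in invoices.items():
    po = orders[key]
    variance = inv["price"] - po["price"]        # price variance per line item
    if variance == 0 and inv["qty"] == po["qty"]:
        accurate += 1                            # invoice matches the order exactly
    print(key, "price variance:", round(variance, 2))

print("invoice accuracy:", accurate / len(invoices))  # share of exact matches
```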
Trading Grid Analytics is a cloud-based analytics platform that offers:

- Better Productivity – Allows any transaction-related issues to be identified and resolved more quickly
- Better Insight – Deeper insights into transactional and supply chain information, driving more informed decisions
- Better Control – Improved visibility of exceptions and underperforming partners allows corrective action to be taken earlier in a business process
- Better Engagement – Collaborate more closely with top partners and mitigate risk with under-performing partners
- Better Innovation – A cloud-based reporting portal provides access any time, any place, anywhere

More information about Trading Grid Analytics is available here. You can also learn more about the benefits of supply chain analytics.

Read More

Unstructured Data Analytics: Replacing ‘I Think’ With ‘We Know’


Anyone who reads our blogs is no doubt familiar with structured data – data that is neatly settled in a database. Row and column headers tell it where to go, the structure opens it to queries, and graphic interfaces make it easy to visualize. You’ve seen the resulting tables of numbers and/or words everywhere from business to government and scientific research. The problem is all the unstructured data, which some research firms estimate could make up between 40 and 80 percent of all data. This includes emails, voicemails, written documents, PowerPoint presentations, social media feeds, surveys, legal depositions, web pages, video, medical imaging, and other types of content.

Unstructured Data, Tell Me Something

Unstructured data doesn’t display its underlying patterns easily. Until recently, the only way to get a sense of a big stack of reports or open-ended survey responses was to read through them and hope your intuition picked up on common themes; you couldn’t simply query them. But over the past few years, advances in analytics and content management software have given us more power to interrogate unstructured content. Now OpenText is bringing together powerful processing capabilities from across its product lines to create a solution for unstructured data analytics that can give organizations a level of insight into their operations that they might not have imagined before.

Replacing Intuition with Analytics

The OpenText solution for unstructured data analytics has potential uses in nearly every department or industry. Wherever people are looking intuitively for patterns and trends in unstructured content, our solution can dramatically speed up and scale out their reach. It can help replace “I feel like we’re seeing a pattern here…” with “The analytics tell us customers love new feature A but they’re finding new feature B really confusing; they wonder why we don’t offer potential feature C.” Feel more confident in your judgment when the analytics back you up.

The Technology Under the Hood

This solution draws on OpenText’s deep experience in natural language processing and data visualization. It’s scalable to handle terabytes of data and millions of users and devices. Open APIs, including a JavaScript API (JSAPI) and REST, promote smooth integration with enterprise applications. And it offers built-in integration with other OpenText solutions for content management, e-discovery, visualization, archiving, and more.

Here’s how it works:

- OpenText accesses and harvests data from any unstructured source, including written documents, spreadsheets, social media, email, PDFs, RSS feeds, CRM applications, and blogs.
- OpenText InfoFusion retrieves and processes raw data; extracts people, places, and topics; and then determines the overall sentiment (a generic sentiment-scoring sketch appears at the end of this section).
- Visual summaries of the processed information are designed, developed, and deployed on OpenText Information Hub (iHub).
- Visuals are seamlessly embedded into the app using iHub’s JavaScript API.
- Users enjoy interactive analytic visualizations that allow them to reveal interesting facts and gain unique insights from the unstructured data sources.

Below are two common use cases we see for the OpenText solution for unstructured data analytics, but more come up every day, from retail and manufacturing to government and nonprofits. If you think of further ways to use it, let us know in the comments below.
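As a generic illustration of the sentiment step in that pipeline (not InfoFusion’s actual engine), here is a short sketch using NLTK’s off-the-shelf VADER analyzer on the feature-feedback example quoted above; it requires downloading the vader_lexicon data once.

```python
# Generic sentiment scoring on free text with NLTK's VADER model;
# illustrative only, not the OpenText InfoFusion pipeline.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for comment in ["Love the new feature A!",
                "Feature B is really confusing.",
                "Why don't you offer feature C?"]:
    # compound ranges from -1 (very negative) to +1 (very positive)
    print(comment, "->", sia.polarity_scores(comment)["compound"])
```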
Use Case 1: On-Demand Web Chat A bank we know told us recently how its customer service team over the past year or two had been making significantly more use of text-based customer support tools—in particular pop-up web chat. This meant the customer service managers were now collecting significantly more “free text” on a wide range of customer support issues including new product inquiries, complaints, and requests for assistance. Reading through millions of lines of text was proving highly time-consuming, but ignoring them was not an option. The bank’s customer service team understood that having the ability to analyze this data would help them spot and understand trends (say, interest in mortgage refinancing) or frequent issues (such as display problems with a mobile interface). Identifying gaps in offerings, common problems, or complaints regarding particular products could help them improve their overall customer experience and stay competitive. Use Case 2: Analysis of Complaints Data Another source of unstructured data is the notes customer service reps take while on the phone with customers. Many CRM systems offer users the ability to type in open-ended comments as an addition to the radio buttons, checklists, and other data structuring features for recording complaints, but they don’t offer built-in functionality to analyze this free-form text.  A number of banking representatives told us they considered this a major gap in their current analytics capabilities. Typically, a bank’s CRM system will offer a “pick list” of already identified problems or topics that customer service reps can choose from, but such lists don’t always provide the level of insight a company needs about what’s making its customers unhappy.  Much of the detail was captured in unstructured free-text fields that they had no easy way to analyze.  If they could quickly identify recurring themes, the banks felt they could be more proactive about addressing problems. Moreover, the banks wanted to analyze the overall emotional tone, or sentiment, of these customer case records and other free-form content sources, such as social media streams. Stand-alone tools for sentiment analysis do exist, but they are generally quite limited in scope or difficult to customize.  They wanted a tool that would easily integrate with their existing CRM system and combine its sentiment analysis with other, internally focused analytics and reporting functions—for example, to track changing consumer sentiment over time against sales or customer-service call volume. A Huge, Beautiful Use Case: Election Tracker ‘16 These are just two of the many use cases for the OpenText solution for unstructured data analytics; we’ll discuss more in future blog posts. You may already be familiar with the first application powered by the solution: the Election Tracker for the 2016 presidential race. The tracker, along with the interesting insights it sifts from thousands of articles about the campaign, has been winning headlines of its own. Expect to hear more about the Election Tracker ’16 as the campaign continues. Meanwhile, if you have ideas on other ways to use our Unstructured Data Analytics solution in your organization, leave them in the comments section.
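As promised above, here is a minimal sketch of the "sentiment over time against call volume" report the banks asked for, assuming case records that have already been scored for sentiment. The data, column names, and scoring scale are hypothetical.

```python
# Aggregate pre-scored support cases by month: average tone plus case count.
import pandas as pd

cases = pd.DataFrame({
    "opened": pd.to_datetime(["2016-01-04", "2016-01-11", "2016-01-12",
                              "2016-02-02", "2016-02-15", "2016-02-16"]),
    "sentiment": [0.4, -0.6, -0.2, 0.1, -0.8, -0.5],  # -1 (angry) .. +1 (happy)
})

by_month = cases.set_index("opened").resample("M")["sentiment"]
report = by_month.mean().to_frame("avg_sentiment")   # tone trend
report["call_volume"] = by_month.size()              # cases per month
print(report)  # a dip in avg_sentiment alongside rising volume flags trouble
```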


The ‘It’ Role in IT: Chief Data Officer


A recent analyst report predicts that 90% of large organizations will have a Chief Data Officer by 2019. The trend is driven by the competitive need to improve efficiency through better use of information assets. To discuss this evolving role and the challenges facing the CDO, Enterprise Management 360 Assistant Editor Sylvia Entwistle spoke to Allen Bonde, a long-time industry watcher with a focus on big data and digital disruption.

Enterprise Management 360: Why are more and more businesses investing in the role of Chief Data Officer now?

Bonde: No doubt the Chief Data Officer is kind of the new 'It' role in the executive suite; there's a lot of buzz around it. Interestingly enough, it spans operations, technology, even marketing, so we see this role in different areas of organizations. I think companies are looking for one leader to be a data steward for the organization, but also they're looking for that same role to be a bit more focused on the future – to be the driver of innovation driven by data. So you could say this role is growing because it sits at the crossroads of digital disruption and technology, as well as the corporate culture and leadership needed to manage in this new environment. It's a role that I think could become the new CIO in certain really data-centric businesses. It could also become the new Chief Digital Officer in some cases. If you dig a little into the responsibilities, it's really about overseeing data from legacy systems and all of your third-party partners, as well as managing data compliance and financial reporting. So it's a role that has both an operational component and a visionary, strategic component. This is where it departs from a purely technical role: there's almost a left-brain, right-brain component to the CDO, empowered by technology, but ultimately it's about people using technology in the right way, in a productive way, to move the business forward.

Enterprise Management 360: What trends do you think will have the biggest impact on the Chief Data Officer in 2016?

Bonde: In terms of the drivers, it's certainly about the growth of digital devices and data, especially data coming from new sources. There's a lot of focus on device data with IoT, or omni-channel data coming from the different touch points in the customer journey. These new sources of data are impacting our business even if we don't own them per se, so I think that's a big driver. Then there's the notion of how new devices are shifting consumption models. If you think about a simple smartphone, the device is both creating and consuming data, changing the dynamic of how individuals create, interact with, and consume data. There's a whole backdrop to this from a regulatory perspective, with privacy and other risk factors. Those drivers are motivating companies to say, "We need an individual or an office to take the lead in balancing the opportunity of data with the risk of data – making sure that, number one, we don't get in trouble as a business, and number two, we take advantage of the opportunity in front of us."

Enterprise Management 360: Honing in on the data side of things – what are the most important levers for ensuring a return on data management, and how do you think those returns should be measured?

Bonde: A common thread, wherever the CDO sits in the organization, is a focus on outcomes – and yes, it's about technology; yes, it's about consumer adoption.
In terms of the so-called CDO agenda, I think the outcomes need to be pretty crisp – for example, "we need to lower our risk profile," or "we need to improve our margins for a certain line of business," or "we're all about bringing a product to market." Focusing on outcomes and getting alignment around them is the first and most important lever you have as a CDO. The second, I would argue, is adoption: you can't do anything at scale with data if you don't have widespread adoption. The key to unlocking the value of data as an asset is driving the broadest adoption, so that most people in an organization – including your customers and your partners – get value out of the insights you're collecting. Ultimately this is about delivering insight into the everyday work the organization is doing, which is very different from classic business intelligence or even big data, which used to be the domain of a relatively small number of people within the organization who were responsible for parceling out the findings as they saw fit. I think the CDO is breaking down those walls, but this also means the CDO faces a much bigger challenge than simply getting BI tools into the hands of a few business analysts.

Enterprise Management 360: A lot of people see big data as a problem. Where would you draw the line between the hype and the fact of big data?

Bonde: It's a funny topic to me, having started my career as a data scientist working with what were then very large data sets, seeing if we could manage risk. This was at a telco, and we didn't call it big data then, but we had the challenge of lots of data from different sources and of trying to pull the meaning out, so that we could make smarter decisions about who might be a fraudulent customer or which markets we could go after. When you talk to CDOs about big data, I think the role has benefited from the hype, because big data set the table for the need for a CDO. Yet CDOs aren't necessarily focused only on big data; in fact, one CDO we were talking to put it this way: "We have more data than we know what to do with." They acknowledged that they already have lots of data; their main struggle was understanding it. It wasn't necessarily a big data problem, it was a problem of finding the right data. We see this day to day when we work with different clients: you can do an awful lot in terms of understanding by blending different data sources – not necessarily big data sources, just different types of data – and you can create insight from relatively small data sets if it's packaged and delivered in the right way. IoT is a perfect example. People are getting excited about the challenges that managing data in the era of IoT will present, but it's not really a big data problem, it's a multi-source data problem. So I think the hype of big data has been useful for the market to get aligned with the importance of data – certainly the value of it when it's collected, blended, cleaned, and turned into actual insights. But in a way the big data problem has shifted to more of a question of "How do we get results quickly?" Fast is the new 'big' when it comes to data. Getting results quickly transcends the ultimate result: if we can get good, useful insights out quickly, that's better than the perfect model or the perfect result. We hear this from CDOs – they want to work in small steps, they want to fail fast.
They want to run experiments that show the value of applying data in the frame of a specific business problem or objective. So to them, big data may have created their role, but on a day-to-day basis it's more about fast data or small data – blending lots of different types of data and then putting them into action quickly. This is where the cloud, and cloud services like Analytics-as-a-Service, are as important to the CDO as big data. The work may involve big data, but it almost doesn't matter what the size of the data is; what matters is how quickly you can apply it to a specific business problem. The CDO may be the keeper of the whole spectrum of how you're collecting, securing, and managing your data, but ultimately they'll be judged by how successful they are at turning that data into practical insights for the everyday worker. That's where the ROI and the return on data management lie across the whole spectrum; if people can't put the insights to use in a practical, easy fashion, it almost doesn't matter what you've done at the back end.

Read more about the Chief Data Officer's role in a digital transformation blog by OpenText CEO Mark Barrenechea.


Unstructured Data Analysis – the Hidden Need in Health Care


The 'Hood

I recently had the opportunity to attend the HIMSS 2016 conference in Las Vegas, one of the largest annual conferences in the field of health care technology. As I walked through the main level of the Exhibit Hall, I was amazed at the size of some vendor booths and displays. Some were the size of a house. With walls, couches, and fireplaces, they really seemed like you could move in! I was working at the OpenText booth on the Exhibit Hall level below the main one, which was actually a converted parking garage floor. Some exhibitors called this lower level "The 'Hood." I loved it, though; people were friendly, the displays were great, and there were fresh-baked cookies every afternoon. I don't know how many of the nearly 42,000 conference attendees I talked to, but I had great conversations with all of them. It's an awesome conference for meeting a diverse mix of health care professionals and learning more about the challenges they face.

The Trick

Half of the people I talked to asked me about solutions for analyzing unstructured data. When I asked what kind of unstructured data they were looking to analyze, 70% of them said claim forms and medical coding. This surprised me. As a software developer with a data analysis background, I admit to not being totally up on health care needs. Claim forms and medical coding had always seemed very structured to me: certain form fields get filled in on the claim form, and rigid medical codes get assigned to particular diagnoses and treatments. Seems straightforward, no? What I learned from my discussions was that claims data requires a series of value judgments to improve the data quality. I also learned that while medical coding is the transformation of health care diagnoses, procedures, medical services, and equipment into universal medical codes, this information is taken from transcriptions of physicians' notes and lab results. This unstructured information is an area where data analysis can help immensely. The trick now is: how do we derive value from this kind of data?

The Reveal

OpenText has an unstructured data analysis methodology that accesses and harvests data from unstructured sources. Its InfoFusion and Analytics products deliver a powerful knockout combination. The OpenText InfoFusion product trawls documents and extracts entities like providers, diagnoses, treatments, and topics. It can then apply various analyses, such as determining sentiment. The OpenText Analytics product line can then provide advanced exploration and analysis, dashboards, and reports on the extracted data and sentiment, with secure access throughout the organization through deployment on the OpenText Information Hub (iHub). Users enjoy interactive analytic visualizations that allow them to gain unique insights from the unstructured data sources.

The Leave-Behind

If you're interested in learning more about our solution for unstructured data analytics, you can see it in action at www.electiontracker.us. While this is not a health care solution, it demonstrates the power of unstructured data analysis, letting users visually monitor, compare, and discover interesting facts. If you're interested in helping me develop an example using claim forms or medical coding data, please contact me at dmelcher@opentext.com. I definitely want to demonstrate this powerful story next year at the HIMSS conference. See you next year in the 'Hood!
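To seed that conversation, here is the sort of toy extraction I have in mind: a small dictionary of known terms plus a pattern for ICD-10-style codes. Everything here is invented for illustration; real medical coding is far more involved, and this is not how InfoFusion works internally.

```python
# Pull a little structure out of a free-text physician note: look up known
# diagnosis terms and also catch explicitly written ICD-10-style codes.
import re

DIAGNOSES = {"hypertension": "I10", "type 2 diabetes": "E11.9"}  # tiny sample map

note = "Pt presents with hypertension; history of type 2 diabetes. Code J45.909 noted."

found = {term: code for term, code in DIAGNOSES.items() if term in note.lower()}
explicit = re.findall(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b", note)  # ICD-10-ish shape

print(found)     # {'hypertension': 'I10', 'type 2 diabetes': 'E11.9'}
print(explicit)  # ['J45.909']
```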


Security Developments: It’s All About Analytics


Analytics is everywhere. It is on the news, in sports, and, of course, part of the 2016 US elections. I recently attended the RSA Conference 2016, an important trade show for security software solutions, because I wanted to see how security vendors were using analytics to improve their offerings. Roaming through hall after hall of exhibits, I saw some interesting trends worth sharing. One of the first things I noticed was how many times analytics was mentioned in vendors' signage. I also noticed a wide range of dashboards showing all different types of security data. (With this many dashboards you'd think you were at a BI conference!) Security is no longer just about providing anti-virus and anti-malware protection in a reactive mode. Security vendors are utilizing cybersecurity and biometric data to try to understand attacks and mount defenses in real time, while an attack is happening. To do this, they need to analyze large amounts of data. This made me realize what some analysts have been predicting: it isn't the data that has the value, it is the proprietary algorithms.

Smarter Analytics = Stronger Security

This is definitely true in the security space. Many vendors provide the same types of service; one of the ways they can differentiate themselves is the algorithms they use to analyze the data. They have to gather a large amount of data to establish baselines of network traffic. Then they use algorithms to analyze data in real time to understand if something is happening out of the norm. They hope to spot an attack at a very early stage, so they can take action to stop it and limit the damage before it can shut down a customer's network or website. This is why algorithms are important: two different products may be looking at the same data, but one detects an attack before the other. This, to me, has big data analytics written all over it.

Security vendors are also paying attention to analytics from the IoT (Internet of Things). A typical corporate data security application gathers a lot of data from different devices – network routers and switches, servers, or workstations, to name a few. The security app will look at traffic patterns and do deep packet inspection of what is in the packets. An example would be understanding what type of request is coming to a specific server: What port is it asking for, and where did the request originate? This could help you understand if someone is starting a DoS (Denial of Service) attack or probing for a back door into your network or server.

What can we learn from the trends on display at RSA this year? I think they show how analytics can help any business, in any industry. Dashboards are still very popular and efficient at displaying data to users, allowing them to understand what is happening and then make business decisions based on that data. And not all advanced analytic tools are equal, because it is not about the data but whether the algorithms can help you use that data to understand what is happening and make better business decisions. OpenText Analytics provides a great platform for businesses to create analytic applications and use data to make better decisions faster. To get an idea of what OpenText Analytics can do, take a look at our Election Tracker '16 app.
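If you want to experiment with the baseline-then-flag idea described above, a toy version fits in a dozen lines. The traffic numbers and the three-sigma threshold below are invented for the example; production systems use far more sophisticated models.

```python
# Learn what "normal" request volume looks like, then flag intervals that
# deviate sharply from that baseline.
from statistics import mean, stdev

baseline = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]  # requests/min, a normal week
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(requests_per_min: float, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from baseline."""
    return abs(requests_per_min - mu) / sigma > threshold

for sample in (103, 240):  # a quiet minute vs. a possible DoS ramp-up
    print(sample, is_anomalous(sample))
```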


Wind and Weather – Data Driven Digest


It's the beginning of March, traditionally a month of unsettled early-spring weather that can seesaw back and forth between snow and near-tropical warmth, with fog, rain, and windstorms along the way. Suitably for the time of year, the data visualizations we're sharing with you this week focus on wind and weather. Enjoy!

You Don't Need a Weatherman…

Everyone's familiar with the satellite imagery on the weather segment of the nightly TV news. It's soothing to watch the wind flows cycle and clouds form and dissipate. Now an app called Windyty lets you navigate real-time and predictive views of the weather yourself, controlling the area, altitude, and variables such as temperature, air pressure, humidity, clouds, or precipitation. The effect is downright hypnotic, as well as educational – for example, you can see how much faster the winds blow at higher altitudes, or watch fronts pick up moisture over oceans and lakes, then dump it as they hit mountains. Windyty's creator, Czech programmer Ivo Pavlik, is an avid powder skier, pilot, and kite surfer who wanted a better idea of whether the wind would be right on days he planned to pursue his passions. He leveraged the open-source Project Earth global visualization created by Cameron Beccario (which in turn draws weather data from NOAA, the National Weather Service, and other agencies, and geographic data from the free, open-source Natural Earth mapping initiative). It's an elegant example of a visualization that focuses on the criteria users want as they query a very large data source; Earth's weather patterns are so large, they require supercomputers to store and process. Pavlik notes that his goal is to keep Windyty a lightweight, fast-loading app that adds new features only gradually, rather than loading it down with too many options.

…To Know Which Way the Wind Blows

Another wind visualization, Project Ukko, is a good example of how to display many different variables without overwhelming viewers. Named after the ancient Finnish god of thunder, weather, and the harvest, Project Ukko models and predicts seasonal wind flows around the world. It's a project of Euporias, a European Union effort to create more economically productive weather prediction tools, and is intended to fill the gap between short-term weather forecasts and the long-term climate outlook. Ukko's purpose is to show where the wind blows most strongly and reliably at different times of the year. That way, wind energy companies can site their wind farms and make investments more confidently. The overall goal, according to Ukko's website, is to make wind energy a more practical and cost-effective part of a country's energy generation mix, reducing dependence on polluting fossil fuels and improving climate change resilience. The project's designer, German data visualization expert Moritz Stefaner, faced the challenge of displaying projections of the wind's speed, direction, and variability, overlaid with the locations and sizes of wind farms around the world (to see if they're sited in the best wind-harvesting areas). In addition, he needed to communicate how confident those predictions were for a given area. As Stefaner explains in an admirably detailed behind-the-scenes tour, he ended up using line elements that show the predicted wind speed through line thickness, and prediction accuracy – compared against decades of historical records – through brightness. The difference between current and predicted speed is shown through line tilt and color.
Note that the lines don't show the actual direction the winds are heading, unlike the flows in Windyty. The combined brightness, color, and size draw the eye to the areas of greatest change. At any point, you can drill down to the actual weather observations for that location and the predictions generated by Euporias' models. For those of us who aren't climate scientists or wind farm owners, the takeaway from Project Ukko is how Stefaner and his team went through a series of design prototypes and data interrogations as they transformed abstract data into an informative and aesthetically pleasing visualization.

Innovation Tour 2016

Meanwhile, we're offering some impressive data visualization and analysis capabilities in the next release of our software, OpenText Suite 16 and Cloud 16, coming this spring. If you're interested in hearing about OpenText's ability to visualize data and enable the digital world, and you'll be in Europe this month, we invite you to look into our Innovation Tour, in Munich, Paris, and London this week and Eindhoven in April. You can:

Hear from Mark J. Barrenechea, OpenText CEO and CTO, about the OpenText vision and the future of information management
Hear from additional OpenText executives on our products, services, and customer success stories
Experience the newest OpenText releases with the experts behind them – including how OpenText Suite 16 and Cloud 16 help organizations take advantage of digital disruption to create a better way to work in the digital world
Participate in solution-specific breakouts and demonstrations that speak directly to your needs
Learn actionable, real-world strategies and best practices employed by OpenText customers to transform their organizations
Connect, network, and build your brand with public and private industry leaders

For more information on the Innovation Tour or to sign up, click here.

Recent Data Driven Digests:
February 29: Red Carpet Edition http://blogs.opentext.com/red-carpet-edition-data-driven-digest/
February 15: Love is All Around http://blogs.opentext.com/love-around-data-driven-digest/
February 10: Visualizing Unstructured Content http://blogs.opentext.com/visualizing-unstructured-analysis-data-driven-digest/


Red Carpet Edition—Data Driven Digest


The 88th Academy Awards will be given out Sunday, Feb. 28. There's no lack of sites analyzing the Oscar-nominated movies and predicting winners. For our part, we're focusing on the best and most thought-provoking visualizations of the Oscars and film in general. As you prepare for the red carpet to roll out, searchlights to shine in the skies, and celebrities to pose for the camera, check out these original visualizations. Enjoy!

Big Movies, Big Hits

Data scientist Seth Kadish of Portland, Ore., trained his graphing powers on the biggest hits of the Oscars – the 85 movies (so far) that were nominated for 10 or more awards. He presented his findings in a handsome variation on the bubble chart, plotting the number of nominations against Oscars won, and how many films fall into each category. (Spoiler alert: however many awards you're nominated for, you can generally expect to win about half.) As you can see from the chart, "Titanic" is unchallenged as the biggest Academy Award winner to date, with 14 nominations and 11 Oscars won. You can also see that "The Lord of the Rings: Return of the King" had the largest sweep in Oscars history, winning in all 11 of the categories in which it was nominated. "Ben-Hur" and "West Side Story" had nearly as high a win rate, taking 11 of 12 awards and 10 of 11, respectively. On the downside, "True Grit," "American Hustle," and "Gangs of New York" were the biggest losers – all of them got 10 nominations but didn't win anything.

Visualizing Indie Film ROI

Seed & Spark, a platform for crowdfunding independent films, teamed up with the information design agency Accurat to create a series of gorgeous 3-D visualizations in the article "Selling at Sundance," which looked at the return on investment 40 recent indie movies saw at the box office. (The movies in question, pitched from 2011 to 2013, included "Austenland," "Robot and Frank," and "The Spectacular Now.") The correlations themselves are thought-provoking – especially when you realize how few movies sell for more than they cost to make. But even more valuable, in our opinion, are the behind-the-scenes explanations the Accurat team supplied on Behance of how they built these visualizations – "(giving) a shape to otherwise boring numbers." The Accurat designers (Giorgia Lupi, Simone Quadri, Gabriele Rossi, and Michele Graffieti) wanted to display the correlation between three values: production budget, sale price, and box office gross. After some experimentation, they decided to represent each movie as a cone-shaped, solid stack of circles, with shading running from budget to sale price; the stack's height represents the box office take. They dressed up the chart with sprinklings of other interesting data, such as the length, setting (historical, modern-day, or sci-fi/fantasy), and number of awards each movie won. This demonstrated that awards didn't do much to drive box office receipts; even winning an Oscar doesn't guarantee a profit, Accurat notes. Because the "elevator pitch" – describing the movie's concept in just a few words, e.g., "It's like 'Casablanca' in a dystopian Martian colony" – is so important, they also created a tag cloud of the 25 most common keywords used on IMDB.com to describe the movies they analyzed. The visualization was published in hard copy in the pilot issue of Bright Ideas Magazine, which launched at the 2014 Sundance Film Festival.

Movie Color Spectrums

One of our favorite Oscars is Production Design.
It honors the amazing work of creating rich, immersive environments that help carry you away to a hobbit-hole, Regency ballroom, 1950s department store, or post-apocalyptic wasteland. And color palettes are a key part of the creative effect. Dillon Baker, an undergraduate design student at the University of Washington, has come up with an innovative way to see all the colors of a movie. He created a Java-based program that analyzes each frame of a movie for its average color, then compresses that color into a single vertical line. The lines are compiled into a timeline that shows the entire work's range of colors. The effect is mesmerizing. Displayed as a spectrum, the color keys pop out at you – vivid reds, blues, and oranges for "Aladdin," greenish '70s earth tones for "Moonrise Kingdom," and Art Deco shades of pink and brown for "The Grand Budapest Hotel." You can also see scene and tone changes – for example, the dark, earthy hues of Anna and Kristoff's journey through the wilderness in "Frozen," contrasted with Elsa's icy pastels. Baker, who is a year away from his bachelor's degree, is still coming up with possible applications for his color visualization technology. (Agricultural field surveying? Peak wildflower prediction? Fashion trend tracking?) Meanwhile, another designer is using a combination of automated color analysis tools and her own aesthetics to extract whole color palettes from a single movie or TV still. Graphic designer Roxy Radulescu comes up with swatches of light, medium, dark, and overall palettes, focusing on a different work each week in her blog Movies in Color. In an interview, she talks about how color reveals mood, character, and historical era, and guides the viewer's eye. Which is not far from the principles of good information design!

Recent Data Driven Digests:
February 15: Love is All Around http://blogs.opentext.com/love-around-data-driven-digest/
February 10: Visualizing Unstructured Content http://blogs.opentext.com/visualizing-unstructured-analysis-data-driven-digest/
January 29: Are You Ready for Some Football? http://blogs.opentext.com/ready-football-data-driven-digest/
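A postscript for the tinkerers: Baker's frame-averaging idea is simple enough to sketch yourself. His tool was Java-based; here is a rough Python equivalent using OpenCV and NumPy. The video path is a placeholder, and this naive version processes every frame, so expect it to be slow on a full-length film.

```python
# Build a "movie barcode": average each frame to one color, then stack the
# colors side by side as vertical lines.
import cv2
import numpy as np

cap = cv2.VideoCapture("movie.mp4")   # placeholder path
columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    columns.append(frame.mean(axis=(0, 1)))   # average BGR color of the frame
cap.release()

# One column per frame, tiled to 200 px tall so the barcode is visible.
barcode = np.tile(np.array(columns, dtype=np.uint8)[np.newaxis, :, :], (200, 1, 1))
cv2.imwrite("barcode.png", barcode)
```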


HIMSS16: OpenText Prescribes Healthier Patient and Business Outcomes


This year's Health Information and Management Systems Society (HIMSS) Conference is right around the corner. As a HIMSS North American Emerald Corporate Member, OpenText is proudly participating in the event, taking place February 29 – March 4 in fabulous Las Vegas. This year's event is shaping up to be a great one. Not only will OpenText continue to showcase the #1 fax server in the healthcare industry, OpenText RightFax, but joining the conversation is OpenText Analytics, a powerful set of analytics tools to help you make better business decisions and drive better business outcomes. Join us at HIMSS and talk to our industry experts to learn how OpenText is driving greater productivity, efficiency, and security in the healthcare industry. With so many reasons to talk to the healthcare experts at OpenText, we've narrowed it down to our favorites.

Top 3 Reasons to Visit OpenText at HIMSS

Save money. Make the shift to hybrid faxing with RightFax and RightFax Connect: hybrid faxing combines the power of your on-premises RightFax system with the simplicity of RightFax Connect for cloud-based fax transmission. Learn how to save money and make your RightFax environment even easier to manage.

Drive compliance. Better patient information exchange with RightFax Healthcare Direct: RightFax is the only fax server that combines fax and Direct messaging in a single solution. Learn how you can accelerate your road to interoperability with Direct through this innovative new solution.

Make better decisions. Learn how OpenText Analytics products can revolutionize your reporting and analytics infrastructure, giving you the tools to build the best data-driven enterprise apps. With live data analytics seamlessly incorporated into your apps, you can track, report, and analyze your data in real time.

Be sure to visit OpenText at Booth #12113, Hall G, and learn how OpenText will help you save money, increase the security and compliance of patient information exchange, and make better decisions through data analytics.


Financial Synergy Outlines Integration of OpenText Big Data Analytics at OpenText Innovation Tour 2016


Speaking at today's OpenText Innovation Tour 2016 event in Sydney, Financial Synergy – Australia's leading superannuation and investment software provider and innovator – outlined how it is embedding OpenText Big Data Analytics functionality into its Acuity platform. Using OpenText Big Data Analytics, Financial Synergy can provide a predictive and customer-centric environment for the wealth management organisations and superannuation funds that rely on the Acuity platform.

Stephen Mackley, CEO at Financial Synergy, said: "Embedding OpenText Big Data Analytics into Acuity will allow superannuation funds of all sizes and types to affordably discover the hidden value of their data. They will have far greater capacity to retain existing customers and expand potential market share, as they will know what has happened, what's happening, and what will happen."

Shankar Sivaprakasam, Vice President, OpenText Analytics (APAC and Japan), said: "One of the greatest challenges for Australia's wealth management industry is its ability to engage with members, particularly in the early stages of account development. Advanced Big Data analytics is key to understanding the customer, their needs, and their behaviours. It will provide the ability to interrogate all structured and unstructured data and monitor how to best meet a customer's goals."

Mackley continued, "We are offering a powerfully flexible tool with which our clients can use strategic, predictive knowledge to create new, efficient business models. It will also enable deeper segmentation, down to 'market of one' levels of customer service."

Financial Synergy is a leading innovator and provider of digital solutions to the superannuation and investment services industry. The combination of its unique platform capabilities and the expertise of its in-house software and administration specialists allows Financial Synergy to transform the member experience and boost business performance.

Article written by Shankar Sivaprakasam, Vice President, Analytics (APAC and Japan), OpenText


Love is All Around – Data Driven Digest


The joy of Valentine's Day has put romance in the air. Even though love is notoriously hard to quantify and chart, we've seen some intriguing visualizations related to that mysterious feeling. If you have a significant other, draw him or her near, put on some nice music, and enjoy these links.

Got a Thing for You

We've talked before about the Internet of Things and the "quantified self" movement made possible by ever smaller, cheaper, and more reliable sensors. One young engineer, Anthony O'Malley, took that a step further by tricking his girlfriend into wearing a heart rate monitor while he proposed to her. The story, as he told it on Reddit, is that he and his girlfriend were hiking in Brazil and he suggested they compare their heart rates on steep routes. As shown in the graph he made later, a brisk hike on a warm, steamy day explains his girlfriend's relatively high baseline pulse – around 100 beats per minute (bpm) – while he sat her down and read her a poem about their romantic history. What kicked it into overdrive was when he got down on one knee and proposed: her pulse spiked at about 145 bpm, then leveled off a little to the 125–135 bpm range as they slow-danced by a waterfall. Finally, once the music ended, the happy couple caught their breath and the heart rate of the now bride-to-be returned to normal. What makes this chart great is the careful documentation. Pulse is displayed not just at 5-second intervals but as a moving average over 30 seconds (smoothing out some of the randomness), against the mean heart rate of 107 bpm. O'Malley thoughtfully added explanatory labels for changes in the data, such as "She says SIM!" (yes in Portuguese) and "Song ends." Now we're wondering whether this will inspire similar tracker-generated reports, such as giving all the ushers in a wedding party FitBits instead of boutonnieres, or using micro-expressions to check whether you actually liked those shower gifts.

Two Households, Both Alike in Dignity

One of the most famous love stories in literature, "Romeo and Juliet," is at heart a story of teenage lovers thwarted by their families' rivalry. Swiss scholar and designer Martin Grandjean illuminated this aspect of the play by drawing a series of innovative network diagrams of all Shakespeare's tragedies. Each circle represents a character – the larger, the more important – while lines connect characters who appear in the same scene together. The "network density" statistic indicates how widely distributed the interactions are; 100% means that each character shares the stage at least once with every other character in the play. The lowest network density (17%) belongs to Antony and Cleopatra, which features geographically far-flung groups of characters who mostly talk amongst themselves (Cleopatra's courtiers, Antony's friends, his ex-wives and competitors back in Rome). By contrast, Othello has the highest network density at 55%; its diagram shows a tight-knit group of colleagues, rivals, and would-be lovers on the Venetian military base at Cyprus trading gossip and threats at practically high-school levels. The diagram of Romeo and Juliet distinctly shows the separate families, Montagues and Capulets.
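An aside for the data-curious: the density statistic is easy to reproduce yourself. It is just the number of character pairs who actually share a scene, divided by the number of possible pairs. Here is a minimal Python sketch using a made-up mini-cast rather than Grandjean's actual data:

```python
# Network density = actual co-appearance pairs / all possible pairs.
from itertools import combinations

def network_density(characters: set, co_appearances: set) -> float:
    possible = len(list(combinations(characters, 2)))  # n*(n-1)/2 pairs
    return len(co_appearances) / possible

chars = {"Othello", "Iago", "Desdemona", "Cassio"}
pairs = {frozenset(p) for p in [("Othello", "Iago"),
                                ("Othello", "Desdemona"),
                                ("Iago", "Cassio")]}
print(f"{network_density(chars, pairs):.0%}")  # 3 of 6 possible pairs -> 50%
```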
Grandjean's method also reveals how groups shape the drama, as he writes: "Trojans and Greeks in Troilus and Cressida, … the Volscians and the Romans in Coriolanus, or the conspirators in Julius Caesar."

Alright, We've Got a Winner

Whether your Valentine's Day turns out to be happy or disappointing, there's surely a pop song to sum up your mood. The Grammy Awards are a showcase for the best – or at least the most popular – songs of the past year in the United States. The online lyrics library Musixmatch, based in Bologna, Italy, leveraged its terabytes of data and custom algorithms to predict this year's Song of the Year winner, basing the model on all 295 past Song of the Year nominees (going back to 1959). As Musixmatch data scientist Varun Jewalikar and designer Federica Fragapane wrote, they built a predictive model based on a random forest classifier, which ranked all five of this year's nominees from most to least likely to win. Before announcing the predicted winner, Fragapane and Jewalikar made a few observations:

Song of the Year winners have been getting wordier, though not necessarily longer (most likely due to the increasing popularity of rap and hip-hop, where lyrics are more prominent).
They've also been getting louder.
Lyrics turned out to be twice as important as audio in their model.

They also note that a sample set of fewer than 300 songs "is not enough data to build an accurate model and also there are many factors (social impact, popularity, etc.) which haven't been modeled here. Thus, these predictions should be taken with a very big pinch of salt." With that said, their prediction… was a bit off, but still a great example of visualized data.

Recent Data Driven Digests:
February 10: Visualizing Unstructured Content
January 29: Are You Ready for Some Football?
January 19: Crowd-Sourcing the Internet of Things
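One more note for the technically inclined: a random-forest ranking of the kind Musixmatch describes looks roughly like this in scikit-learn. Every feature and number below is invented for the sketch; their real model was trained on the 295 actual nominees with lyric and audio features.

```python
# Train on past songs (features + won/lost), then rank nominees by
# predicted probability of winning.
from sklearn.ensemble import RandomForestClassifier

# features per song: [word_count, loudness_db, duration_sec]
past_songs = [[180, -12, 230], [320, -8, 250], [150, -14, 200],
              [400, -6, 260], [210, -11, 215], [290, -9, 245]]
won        = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(past_songs, won)

nominees = {"Song A": [350, -7, 255], "Song B": [160, -13, 210]}
for name, feats in nominees.items():
    print(name, model.predict_proba([feats])[0][1])  # estimated P(win)
```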


Visualizing Unstructured Analysis — Data Driven Digest


As the 2016 Presidential campaigns finish New Hampshire and move on toward "Super Tuesday" on March 1, the candidates and talking heads are still trading accusations about media bias. Which got us thinking about text analysis and ways to visualize unstructured content. (Not that we're bragging, but TechCrunch thinks we have an interesting way to measure the tenor of coverage on the candidates…) So this week in the Data Driven Digest, we're serving up some ingenious visualizations of unstructured data. Enjoy!

Unstructured Data Visualization in Action

We've been busy with our own visualization of unstructured data – namely, all the media coverage of the 2016 Presidential race. Just in time for the first-in-the-nation Iowa caucuses, OpenText released Election Tracker '16, an online tool that lets you monitor, compare, and analyze news coverage of all the candidates. Drawing on OpenText Release 16 (Content Suite and Analytics Suite), Election Tracker '16 automatically scans and reads hundreds of major online media publications around the world. This data is analyzed daily to determine sentiment and extract additional information, such as people, places, and topics. It is then translated into visual summaries and embedded into the election app, where it can be accessed through interactive dashboards and reports. This kind of content analysis can reveal much more than traditional polling data: holistic insights into candidates' approaches and whether their campaign messages are attracting coverage. And although digesting the daily coverage has long been a part of any politician's day, OpenText Release 16 can do what no human can: read, analyze, process, and visualize a billion words a day.

Word Crunching 9 Billion Tweets

While we're tracking language, forensic linguist Jack Grieve of Aston University, Birmingham, England, has come up with an "on fleek" (perfect, on point) way to pinpoint how new slang words enter the language: Twitter. Grieve studied a dataset of Tweets from 2013–14 from 7 million users all over America, containing nearly 9 billion words (collected by geography professor Diansheng Guo of the University of South Carolina). After eliminating all the regular, boring words found in the dictionary (so that he'd only be seeing "new" words), Grieve sorted the remaining words by county, filtered out the rare outliers and obvious mistakes, and looked for the terms that showed the fastest rise in popularity, week over week. These popular newcomers included "baeless" (single, a seemingly perpetual state), "famo" (family and friends), "TFW" ("that feeling when…" – e.g., TFW a much younger friend has to define the term for you; that would be chagrin), and "rekt" (short for wrecked or destroyed, not "rectitude"). As described in the online magazine Quartz, Grieve found that some new words are popularized by social media microstars or are native to the Internet, like "faved" (to "favorite" a Tweet) or "amirite" (an intentional misspelling of "Am I right?" mocking the assumption that your audience agrees with a given point of view). Grieve's larger points include the insights you can get from crunching Big Data (9 billion Twitter words!) and social media's ability to capture language as it's actually used, in real time. "If you're talking about everyday spoken language, Twitter is going to be closer than a news interview or university lecture," he told Quartz.
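The core of Grieve's method fits in a few lines of Python: drop the dictionary words, count what remains per period, and rank terms by growth. The tiny "dictionary" and the two weeks of pretend tweets below are stand-ins for his 9-billion-word corpus.

```python
# Find emerging words: filter known vocabulary, count per week, rank by rise.
from collections import Counter

DICTIONARY = {"the", "a", "is", "on", "my", "so"}  # stand-in for a real word list

weekly_tweets = {
    "week1": "the squad is on fleek so famo",
    "week2": "on fleek on fleek famo famo famo baeless",
}

counts = {wk: Counter(w for w in text.split() if w not in DICTIONARY)
          for wk, text in weekly_tweets.items()}

growth = {w: counts["week2"][w] - counts["week1"][w] for w in counts["week2"]}
for word, delta in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(word, delta)  # fastest risers first: famo, then fleek and baeless
```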
Spreading Virally

On a more serious subject, unstructured data in the form of news coverage helps track outbreaks of infectious diseases such as the Zika virus. HealthMap.org is a site (and mobile app) created by a team of medical researchers and software developers at Boston Children's Hospital. They use "online informal sources" to track emerging diseases including flu, the dengue virus, and Zika. Their tracker automatically pulls from a wide range of intelligence sources – online news stories, eyewitness accounts, official reports, and expert discussions about dangerous infectious diseases – in nine languages, including Chinese and Spanish. Drawing from unstructured data is what differentiates HealthMap.org from other infectious disease trackers, such as the federal Centers for Disease Control and Prevention's weekly FluView report. The CDC's FluView provides an admirable range of data, broken out by patients' age, region, flu strain, comparisons with previous flu seasons, and more. The only problem is that the CDC bases its reports on flu cases reported by hospitals and public health clinics in the U.S. This means the data is both delayed and incomplete (it doesn't include flu victims who never saw a doctor, or cases not reported to the CDC), limiting its predictive value. By contrast, the HealthMap approach captures a much broader range of data sources, so its reports convey a fuller picture of disease outbreaks, in near-real time, giving doctors and public-health planners (or nervous travelers) better insight into how Zika is likely to spread. This kind of data visualization is just what the doctor ordered.

Recent Data Driven Digests:
January 29: Are You Ready for Some Football?
January 19: Crowd-Sourcing the Internet of Things
January 15: Location Intelligence


3 Ways Election Tracker ’16 Solves the Unstructured Data Dilemma


When OpenText launched Election Tracker '16, we received several encouraging and positive responses about how easy its interactive visualization and intelligent analysis make it to compare stats about your favorite Presidential candidate. And without fail, the next question was, "How does it work, and how could it help my business?" Powered by Release 16, Election Tracker is a great example of unstructured data analysis in action, showcasing the power and importance of unstructured data analysis in a relatable way. In fact, we feel the Election Tracker addresses the dilemma of unstructured data in three distinct ways.

1) Intelligent Analysis

Making sense of unstructured data is a top concern for digital organizations. Perhaps it's trying to understand Google's PageRank algorithm, finding sentiment in the body of an email or website, or scanning digital health records for trends. It is also important for businesses that need to organize and govern all the data within an enterprise. Companies are not shy about throwing money at the problem: the global business intelligence market saw revenue of nearly $6 billion in 2015, and that number is expected to grow to $17 billion at a CAGR of 10.38 percent between now and 2020, according to market research firm Technavio. Much of the investment is expected to come in the form of data analysis and cloud implementation.

The secret sauce is our content analytics tool, OpenText InfoFusion. Using natural language processing technology – a text mining engine – the software tackles the core of unstructured data by extracting the most relevant linguistic nouns from semi-structured or unstructured textual content. The extraction is based on controlled vocabularies such as names, places, organization labels, product nomenclature, facility locations, employee directories, and even your business jargon. The InfoFusion engine can automatically categorize content based on a file plan, hierarchical taxonomy, or classification tree, and it automatically creates a summary combining the most significant phrases and paragraphs. It can also show related documents. This ability to relate documents is based on semantics – asking the engine to give you documents that share the same keywords, key phrases, topics, and entities. The engine can also detect the ways that key words and phrases are used and correlate them with known indicators of whether a document is dryly factual or conveying emotion about a topic, and whether that emotion is positive or negative – that is, its sentiment.

2) Interactive Visualization

All the data in the world means nothing without some way to visually represent the context. Most pure-play "text analytics" solutions on the market today stop short of actual analysis. They are limited to translating free text into entities and taxonomies, leaving the actual visualization and analysis for the customer to figure out using other technologies. The technology powering the Election Tracker overcomes this dilemma by converting the data into a visual representation that helps with content analysis. Once the Election Tracker mines raw text from scores of major news sites around the world, it uses OpenText Content Analytics to process the content. This determines sentiment and extracts people, places, and topics following standard or custom taxonomies, providing the metadata necessary to conduct an analysis. The tracker determines the objectivity or subjectivity of content and its tone: positive, negative, or neutral.
Visual summaries of the news data are generated with the Analytics Designer, then housed and deployed on OpenText iHub. The iHub-based visuals are seamlessly embedded into the Election Tracker user interface using the iHub JavaScript API.

3) Scalable and Embeddable

While we designed the Election Tracker to automatically crawl the web for election-focused articles, the technology behind the scenes can access and harvest data from any unstructured source. This includes social sites like Twitter, Facebook, and LinkedIn; email; multimedia message service (MMS); document archives like PDFs; RSS feeds; and blogs. Additionally, these sources can be combined with structured data to provide extremely valuable context – such as combining brand social sentiment from Twitter with product launch campaign results from a customer relationship management source, giving unparalleled insight into the success of a launch process. Overcoming the problems of scale can help ease fears about needing to add more data sources in the future. And because the solution is embeddable, companies can use their own branding and serve their customers in a format that is comfortable for the end user.

See what all the buzz is about by visiting Election Tracker '16 at Electiontracker.us. For more on the technology behind Election Tracker '16, here is a 20-minute election tracker demo. Also, visit our blog to discover the importance of designing for unstructured analysis of data.
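A footnote for developers: the "related documents" idea described in section 1 is commonly implemented with TF-IDF vectors and cosine similarity. Here is a generic sketch using scikit-learn; it illustrates the standard technique, not InfoFusion's internal method, and the sample documents are invented.

```python
# Rank documents by similarity to a query document using TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Candidate outlines tax policy in Iowa speech",
    "Iowa caucus crowd reacts to tax policy remarks",
    "New smartphone review praises camera and battery",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sims = cosine_similarity(tfidf[0], tfidf).ravel()  # doc 0 vs. all docs

for doc, score in sorted(zip(docs, sims), key=lambda p: -p[1])[1:]:
    print(f"{score:.2f}  {doc}")  # the Iowa/tax article far outranks the phone review
```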


OpenText Releases U.S. Presidential Election Application, Powered by OpenText Release 16


As the highly contested 2016 U.S. Presidential election and the Iowa caucus approach, I'm pleased to announce the release of Election Tracker '16, an online application for anyone who wants to monitor, compare, and gain insights into the 2016 U.S. Presidential election. For more information, please visit Electiontracker.us.

How does Electiontracker.us work? Utilizing OpenText Release 16 (Content Suite and Analytics Suite), Election Tracker '16 automatically scans and reads hundreds of top online media publications around the world, capitalizing on the knowledge buried in their unstructured information. This data is analyzed daily to determine sentiment and extract additional information, such as people, places, and topics. It is then translated into visual summaries and embedded into the election app, where it can be accessed using interactive dashboards and reports. That is correct: hundreds of websites and a billion words, processed, stored, and visualized in real time. And we have been collecting this data for months to show trends and sentiment changes. Powered by OpenText Release 16, this information-based application provides anyone interested in following the critical 2016 election with deep insights into candidate information, revealing much more than traditional polling data. Using Electiontracker.us, election enthusiasts can gain a holistic view of how candidates are performing based on media sentiment, which can be a more accurate indication of future success. Election Tracker '16 is built using OpenText Release 16, including Content Suite (store) and Analytics Suite (visualize and predict), bringing unstructured data to life. OpenText Release 16 can do what no human can: read, analyze, process, and visualize a billion words a day.

Transforming Unstructured Data into Interactive Insights

All the components of OpenText Release 16 are important, but let me focus on the analytic aspects of Election Tracker '16. As we saw in the 2012 U.S. election, analytics played a major role in the success of the Obama campaign. Drawing from a centralized database of voter information, President Obama's team was able to leverage analytics to make smarter, more efficient campaign decisions. Everything – from media buy placements to campaign fundraising to voter outreach – was driven by insight into data. This year, analytics promises to play an even bigger role in the U.S. Presidential election.

Analytics has made its way into every industry and has become a part of everyday life. Just as candidates look for a competitive advantage to help them win, so too must businesses. Research shows that organizations that are data driven are more profitable and productive than their competitors. Analytics and reporting solutions help organizations become data driven by extracting value from their business data and making it available across the enterprise to facilitate better decision making. Organizations have been using analytics to unlock the power of their business data for years. Marketing analysts use analytics to evaluate social content and understand customer sentiment. Legal analysts can gain a quick understanding of the context and sentiment of large volumes of legal briefs. Data directors, tasked with organizing and governing enterprise data, apply analytics to solve complex business problems. But basic analytics has become table stakes, with laggards, smaller companies, and even skeptics jumping on the bandwagon.
Embedded analytics is the new competitive advantage. Scan the pure-play "text analytics" solutions on the market today and you'll see they stop short of actual analysis: they are limited to translating free text into entities and taxonomies, leaving the actual visualization and analysis for organizations to figure out using other technologies. At the other end of the spectrum, traditional dashboard tools lack the sophistication needed to process free text effectively, and they struggle with large amounts of data. With OpenText Release 16, our Analytics Suite accomplishes both with ease, empowering users to quickly and easily build customized enterprise reporting applications that summarize and visualize insights from unstructured big data, securely delivered across web browsers, tablets, and phones. So whether you're out to win the race for the U.S. Presidency, gain market share, or attract new customers, an embedded analytics and reporting solution like the one behind Election Tracker '16 can help you cross the finish line first. Throughout the race to November 2016, we'll be tapping into the power of Election Tracker '16 to shed light on how candidates perform during key election milestones. Join us for a behind-the-scenes look at the campaign trail. For more information on Election Tracker '16, powered by OpenText, read the press release.
