Stannie Holt
Stannie Holt is a Marketing Content Writer at OpenText. She has over 20 years' experience as a journalist, market research analyst, and content marketing expert in the fields of enterprise business software, machine learning, e-discovery, and analytics.

MRDM Uses OpenText Analytics to Improve Health Care Outcomes


One of the high-potential use cases for Big Data is improving health care. Millions of gigabytes of information are generated every day by medical devices, hospitals, pharmacies, specialists, and more. The problem is collecting and sorting through this enormous pool of data to figure out which hospitals, providers, or treatments are the most effective, and putting those insights into the hands of patients, insurers, and other affected parties. Finally, that promise is starting to become reality.

A Dutch company, Medical Research Data Management (MRDM), is using OpenText™ Analytics to help the Netherlands’ health care system identify the most productive and cost-efficient providers and outcomes. The effort to make data collection faster, easier, and more accurate is already paying off. For example, hospitals using MRDM’s OpenText-based analytics and reporting solution for evaluating medical data have been able to reduce complications after colon cancer surgeries by more than half over four years.

MRDM chose OpenText Analytics after realizing it needed a more robust technical platform – one that could support more complex, sophisticated medical reporting solutions and larger volumes of data than the open-source BIRT (Business Intelligence and Reporting Tools) platform it had used since its founding in 2012. It rejected many other commercial solutions because they either lacked key functionality or had an inconvenient pricing structure. (OpenText allows an unlimited number of end users.)

The OpenText Analytics components MRDM uses include a powerful deployment and visualization server that supports a wide range of personalized dashboards through an easy-to-use, intuitive interface. This means MRDM can easily control who sees what. For example, hospitals get reports and visualizations, refreshed every week, with raw data about the outcomes of millions of medical procedures. They can review the findings and pinpoint any inaccurate data before approving them for publication. Next, MRDM handles release of these reports in customized formats to insurance companies, Dutch government agencies, and patient organizations. With more detailed information in hand, these groups can make better decisions, leading to better use of limited health care resources.

To learn more about this exciting customer success story, including MRDM’s plans to expand throughout Europe and further abroad, click here.


Find More Knowledge in Your Information at Enterprise World 2017

If your office is like most, it’s got millions of gigabytes of information stashed away on computer hard drives – and maybe even file cabinets full of paper! Every single business process generates enormous data streams – not just your ERP and CRM systems, but payroll, hiring, even ordering lunch from the caterer for those regular Thursday meetings. So wouldn’t you like to find out how you can leverage the knowledge already contained in all that information? And derive more value from your existing systems of record?

Come to OpenText Enterprise World this July and you’ll hear how organizations in every industry are using the cutting-edge techniques of OpenText™ Analytics to derive more value from their data – including self-service access, prediction and modeling, and innovative techniques to get insights more easily out of unstructured data (aka the stuff you use most of the time: documents, messages, and social media). We are excited to showcase OpenText Magellan at this year’s conference and show you the impact it will have in helping analyze massive pools of data and harness the power of your information. We’ll also preview the roadmap of new developments in the OpenText Analytics Suite.

Helping Our Human Brains Navigate Big Data

Thanks to cheap and abundant technology, we have so much data at our disposal – creating up to 2.5 exabytes a day by some estimates – that the sheer amount is overwhelming. In fact, it’s more than our human brains can make sense of. “It’s difficult to make decisions, because that much data is more than we can make sense of, cognitively,” says Lalith Subramanian, VP of Engineering for Analytics at OpenText. “That’s where machine learning and smart analytics come into the picture,” he explains. “We intend to do for Big Data what earlier reporting software companies tried to do for business intelligence – simplify it and make it less daunting, so that reasonably competent people can do powerful things with Big Data.”

Expect plenty of demos and use cases, including a look at our predictions from last year’s Enterprise World about who would die on Season 6 of “Game of Thrones,” and new prognostications for Season 7.

Do-It-Yourself Analytics Provisioning

Meanwhile, OpenText also plans to unveil enhancements to the Analytics Suite that will give users even more power to blend and explore their own data. OpenText™ iHub, our enterprise-grade deployment server for interactive analytics at the core of the Analytics Suite, is adding the ability to let non-technical users provision their own data for analysis, rather than relying on IT, Subramanian says. They can freely blend and visualize data from multiple sources. These sources will soon include not just structured data, such as spreadsheets, prepared database files, and ERP records, but also unstructured data, including text documents, web content, and social media streams. That’s because new algorithms to digest and make sense of language and text are being infused into both OpenText Analytics and OpenText™ InfoFusion, an important component in the content analytics process. With OpenText™ Big Data Analytics, users will be able to apply these new, customized algorithms to self-provisioned data of many types. At the same time, InfoFusion is adding adapters to pull content off Twitter feeds and web sites automatically.

The Word on the Street

One use case for this combination of OpenText InfoFusion and the Analytics Suite is to research topics live, as they’re being discussed online, Subramanian adds. “You could set it up so that it goes out as often as desired to see the latest things related to whatever person or topic you’re interested in. Let’s say OpenText Corporation – then it’ll go look for news coverage about OpenText, plus the press releases we publish, plus Tweets by and about us, all aggregated together, then analyzed by source, sub-topic, and emotional tone (positive, negative, or neutral), as we’ve demonstrated with our content analytics-based Election Tracker. Over time we’d add more and more (external information) sources.”

Keep in mind, politicians, pundits, and merchants have been listening to “the word on the street” for generations. But that used to require armies of interns to go through all the mail, voice messages, conversations, or Letters to the Editor – and the net result was score-keeping (“yea” vs. “nay” opinions) or subjective impressions. Now these opinions, like every other aspect of the digital economy, can be recorded and analyzed by software that’s objective and tireless. And they can add up to insights that enrich your business intelligence for better decision-making.

To see and hear all of this in person, don’t miss Enterprise World in Toronto, July 10-13. Click here for more information and to register.


What a Difference a Day Makes: Get up to Speed on OpenText Analytics in 7 Hours


One of the biggest divides in the work world these days is between people with software skills and “business users” – the ones who can work their magic on data and make it tell stories, and… well, everyone else (those folks who often have to go hat in hand to IT, or their department’s digital guru, and ask them to crunch the numbers or build them a report). But that divide is eroding with help from OpenText™ Analytics. With just a few hours’ training, you can go from absolute beginner to creating sophisticated data visualizations and interactive reports that reveal new insights in your data.

And if you’re within travel distance of Washington, D.C., have we got an offer for you! Join OpenText Analytics on Wednesday, May 10, at The Ritz-Carlton, Arlington, VA, for a free one-day interactive, hands-on analytics workshop that dives deep into our enterprise-class tools for designing, deploying, and displaying visually appealing information applications. During this workshop, you’ll gain insights from our technical experts Dan Melcher and Geff Vitale. You’ll learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business.

We recently added a bonus session in the afternoon on embedding secure analytics into your own applications. Here, you’ll see why many companies use OpenText™ iHub to deliver embedded analytics, either to customers (e.g., through a bank’s portal) or as an OEM app vendor embedding our enterprise-grade analytics on a white-label basis to speed up the development process.

Here’s what to expect in each segment:

Learning the Basics of OpenText Analytics Suite
Get introduced to the functions and use cases of OpenText Analytics Suite, including basic data visualizations and embedded analytics. Start creating your own interactive reports and consider what this ability could do for your own business.

Analyze the Customer
You’ll learn about the advanced and predictive analysis features of the Analytics Suite through a walk-through of a customer analysis scenario. Begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise in data science or statistics needed.

Drive Engagement with Dashboards
A self-service scenario where you create and share dashboards completely from scratch will introduce the dashboarding and reporting features of OpenText Analytics. See how easy it is to assemble interactive data visualizations that allow users to filter, pivot, explore, and display the information any way they wish.

Embed Secure Analytics with iHub
After the lunch break, learn how to enable secure analytics in your application, whether as a SaaS or on-premises deployment. OpenText answers the challenge with uncompromising extensibility, scalability, and reliability.

Who should attend?

- IT directors and managers, information technology managers, business analysts, product managers, and architects
- Team members who define, design, and deploy applications that use data visualizations, reports, dashboards, and analytics to engage their audience
- Consultants who help clients evaluate and implement the right technology to deliver data visualizations, reports, dashboards, and analytics at scale

If you are modernizing your business with Big Data and want your entire organization to benefit from compelling data visualizations, interactive reports, and dashboards – then don’t miss this free, hands-on workshop! For more details or to sign up, click here.

And if you’d really like to dive into the many facets of OpenText Analytics, along with Magellan, our next-generation cognitive platform, and the wide world of Enterprise Information Management, don’t miss Enterprise World, July 10-13 in Toronto. For more information, click here.


Knorr-Bremse Keeps the Wheels Rolling with Predictive Maintenance Powered by OpenText Analytics


Trains carry billions of passengers and tons of freight a year worldwide, so making sure their brakes work properly is no mere routine maintenance check. Helping rail transport operate more safely and efficiently is top-of-mind for the Knorr-Bremse Group, based in Munich, Germany. The company is a leading manufacturer of brakes and other components for trains, metro cars, and buses. These components include sophisticated programming to optimize operations and diagnosis.

The company developed iCOM (Intelligent Condition Oriented Maintenance), an Internet of Things-based platform for automated maintenance and diagnosis. Through onboard sensors, iCOM gathers data wirelessly from more than 30 systems throughout a train car, including brakes, doors, wipers, heating, and ventilation. These IoT sensors continually report back conditions such as temperature, pressure, energy generation, duration of use, and error conditions. iCOM analyzes the data to recommend condition-based, rather than static, scheduled maintenance. This means any performance issue can be identified before it becomes a serious safety problem or a more costly repair or replacement. For iCOM customers, this means better safety, more uptime, improved energy efficiency, and lower operating costs for their rail fleets.

As more customers adopted the solution, they began demanding more sophisticated analysis (to see when, where, and even why an event happens), more visually engaging displays, and the ability to build their own reports without relying on IT. Knorr-Bremse knew it needed to upgrade the technology it was using for analysis and reporting on the vast quantities of data the iCOM solution gathers, replacing open-source BIRT (Business Intelligence and Reporting Tools). A new analytics platform would also have to be scalable enough to cope with the enormous volumes of real-time data that thousands of sensors across a rail fleet continually generate.

Further, Knorr-Bremse needed an analytics solution it could develop, embed into the overall iCOM platform, and bring to market with the least possible time and coding effort. The answer to these challenges was the OpenText™ Analytics Suite.

“Due to the easy-to-use interface of OpenText Analytics, our developers were quickly productive in developing the analytics and reporting aspects of iCOM. iCOM is based on Java and consequently it has been very easy to integrate and embed the OpenText Analytics platform [into it]. It is not just about shortening the time to develop, though. The results have to look good, and with OpenText, they do,” says Martin Steffens, the iCOM digital platform project manager and software architect at Knorr-Bremse.

To learn more about Knorr-Bremse’s success with OpenText Analytics, including a potential drop of up to 20 percent in maintenance costs, click here.
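The condition-based idea described above can be illustrated with a deliberately tiny sketch – not Knorr-Bremse’s actual iCOM logic; the component names, sensor fields, and thresholds below are invented for illustration. The point is simply that a component is flagged for service only when its readings drift out of an allowed range, rather than on a fixed schedule:

```python
# Toy illustration of condition-based maintenance: flag only components
# whose latest sensor readings fall outside allowed operating ranges.
# (Field names and thresholds are hypothetical, not from iCOM.)

LIMITS = {
    "brake_temp_c": (0.0, 120.0),        # allowed temperature range, Celsius
    "line_pressure_bar": (4.5, 10.0),    # allowed brake-line pressure, bar
}

def needs_service(readings):
    """Return the list of out-of-range measurements for one component."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

fleet = {
    "car_12_brake": {"brake_temp_c": 135.0, "line_pressure_bar": 6.2},
    "car_07_brake": {"brake_temp_c": 88.0, "line_pressure_bar": 6.0},
}

# Only car_12_brake is flagged: its temperature exceeds the limit.
flagged = {unit: needs_service(r) for unit, r in fleet.items() if needs_service(r)}
print(flagged)
```

A real system would of course learn trends over time rather than apply static thresholds, but even this shape shows why condition-based scheduling catches problems early while leaving healthy equipment in service.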


From KPIs to Smart Slackbots, Hot New Analytics Developments at OpenText Enterprise World 2017

Innovation never sleeps in the OpenText Analytics group, where we’re working hard to put together great presentations for Enterprise World 2017, July 10-13 in Toronto. We’ll offer a sneak peek at product road maps, exciting new features, and innovative customer use cases for the OpenText Analytics Suite. Plus, you can get hands-on experience building custom-tailored apps, get tips from experts, immerse yourself in technical details, and network with peers.

Learn about:

- Reporting and dashboards with appealing, easy-to-create visual interfaces
- Self-service analytics to empower your internal users and customers and help you make better decisions
- Best-of-breed tools to crunch massive Big Data sets and derive insights you never could have before
- Cognitive computing and machine learning
- Capturing the Voice of the Customer
- Structured and unstructured content analytics that can unlock the hidden value in your documents, chats, and social media feeds

Our presentations include:

- Industry-focused sessions, including OpenText Analytics for Financial Services. Hear how we add value in common use cases within the financial industry, including customer analytics, online consumer banking, and corporate treasury services.
- Showcases of hot new functions, like Creating Intelligent Analytic Bots for Slack (the popular online collaboration tool).
- Personalized training in OpenText Analytics. Our three-part Hands-On Analytics Workshop can take you from absolute beginner to competent user, harnessing the power of Big Data for better insights and building compelling data visualizations, interactive reports, and dashboards.
- Technical deep dives with popular tools, such as Business Performance Management Analytics. We’ll show you how to use OpenText Analytics to measure KPIs and performance-driven objectives, including the popular Balanced Scorecard methodology.
- A fascinating use case: Financial Contract Analysis with Perceptiv. See how customers are using our advanced analytics tool to capture, organize, and extract relevance from over 200 fields in half a million financial derivative contracts.
- How Many Lawyers Does It Take to Analyze an Email Server? Learn how lawyers and investigators are using our cutting-edge OpenText Discovery technology, including email mapping, concept-based search, and machine learning, to find the “smoking guns” in thousands of pages of email.

Click here for more details about attending Enterprise World. See you in Toronto!


Steel Mill Gains Insight, Makes Better Decisions Through Analytics


When you think of a steel mill, crucibles of glowing molten metal and giant molds and rollers probably come to mind, not complex financial analysis. But like every other industry nowadays, steel mills – especially ones that specialize in scrap metal recycling – have to keep reviewing their material and production costs and the ever-changing demand for their products so that they can perform efficiently in a competitive global market.

That was the case for North Star BlueScope Steel in Delta, Ohio, which produces hot-rolled steel coils, mostly for the automotive and construction industries. Founded in 1997, the company is the largest scrap steel recycler in Ohio, processing nearly 1.5 million tons of metal a year.

To operate profitably, North Star BlueScope examines and analyzes its costs and workflow every month, pulling in data from all over the company, plus external market research. But it was hampered by slow and inefficient technology centered on Microsoft Excel spreadsheets so large and unwieldy that they took up to 10 minutes just to open. Comparing costs for, say, the period of January through May required North Star staffers to open five separate spreadsheets (one for each month) and combine the information manually.

Luckily, the company was already using OpenText™ iHub as a business intelligence platform for its ERP and asset management systems. It quickly realized iHub would be a much more efficient solution for its monthly costing analysis than the Excel-based manual process.

Making Insights Actionable

In fact, North Star BlueScope Steel ended up adopting the entire OpenText™ Analytics Suite, including OpenText™ Big Data Analytics (BDA), whose advanced approach to business intelligence lets it easily access, blend, explore, and analyze data. The results were impressive. The steel company can now analyze a much larger range of its data and get better insights to steer decision-making. For example, it can draw on up to five years’ worth of data in a single, big-picture report, or drill down to a cost-per-minute understanding of mill operations. Now it has a better idea of the grades and mixes of steel products most likely to generate higher profits, and the customers most likely to buy those products.

To learn more about how North Star BlueScope Steel is using OpenText Analytics to optimize its operations – plus its plans to embrace the Internet of Things by plugging data streams from its instruments about electricity consumption, material usage, steel prices, and even weather directly into Big Data Analytics – click here.
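The manual step the story describes – opening one spreadsheet per month and combining the figures by hand – is exactly the kind of consolidation an analytics pipeline automates. A minimal sketch of the idea (the cost categories and figures below are invented for illustration, not North Star’s actual data):

```python
# Toy sketch of consolidating per-month cost records into one view,
# the January-through-May comparison the article describes staffers
# assembling by hand from five Excel files. (Figures are illustrative.)

monthly_costs = {
    "Jan": {"scrap": 410_000, "energy": 150_000},
    "Feb": {"scrap": 395_000, "energy": 160_000},
    "Mar": {"scrap": 420_000, "energy": 155_000},
    "Apr": {"scrap": 405_000, "energy": 149_000},
    "May": {"scrap": 430_000, "energy": 152_000},
}

def total_by_category(costs):
    """Sum each cost category across all months in one pass."""
    totals = {}
    for month_figures in costs.values():
        for category, amount in month_figures.items():
            totals[category] = totals.get(category, 0) + amount
    return totals

print(total_by_category(monthly_costs))
# {'scrap': 2060000, 'energy': 766000}
```

With data held in one queryable store rather than five separate files, a multi-month comparison becomes a single function call instead of a manual copy-and-paste exercise.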


Westpac Bank Automates and Speeds Up Regulatory Reporting with OpenText Analytics


When Westpac Banking Corporation was founded in 1817 in a small waterfront settlement in Australia, banking was rudimentary. Records were kept with quill pens in leather-bound ledgers: pounds, shillings, and pence into the cashbox; pounds, shillings, and pence out. (Until a cashier ran off with half the fledgling bank’s capital in 1821, that is.)

Now, exactly 200 years after Westpac’s parent company opened its doors, it’s not only the oldest bank in Australia but also the second-largest, with 13 million customers worldwide and over A$812 billion under management. Every year it does more and more business in China, Hong Kong, and other Asia-Pacific nations.

The downside to this expansion: more forms to fill out. Managing the electronic and physical flow of cash across national borders is highly regulated, requiring prompt and detailed reports of transactions, delivered in different formats for each country and agency that oversees various aspects of Westpac’s business. These reports require information from multiple sources throughout the company. Until recently, pulling out and consolidating all these complex pieces of data was a manual, slow, labor-intensive process that often generated data errors, according to Craig Chu, Westpac’s CIO for Asia.

The bank knew there had to be a better way to meet its regulatory requirements – but one that wouldn’t create its own new IT burden. A successful proof of concept led to Westpac adopting an information management and reporting solution from OpenText™ Analytics.

To hear Chu explain how Westpac streamlined and automated its reporting process with OpenText™ iHub and Big Data Analytics, and all the benefits his company has realized, check out this short video showcasing the success story. (Spoiler alert: self-service information access empowers customers and employees.) If you’d like to learn more about what the OpenText Analytics Suite could do for your organization, click here.


Post-Election Score: Pundits 0, Election Tracker 1


In the midst of post-election second-guessing over why so many polls and pundits failed to predict Donald Trump’s win, there was one clear success story: OpenText™ Election Tracker. Election Tracker, the web app that analyzed news coverage of the Presidential race from over 200 media outlets worldwide for topics and sentiment, was a great showcase for the speed, robustness, and scalability of the OpenText™ Information Hub (iHub) technical platform it was built on. Facing demands for more than 54,000 graphic visualizations an hour on Election Day, it ramped up quickly with no downtime – the kind of performance you’d expect from OpenText™ Analytics.

Moreover, the tracker’s value in revealing patterns in the tone and extent of campaign news coverage provided valuable extra insight into voter concerns that pre-election polls didn’t uncover – and that insight didn’t just end after Election Day. It’s just one in a series of proofs-of-concept showing how our unstructured data analytics solutions shine at analyzing text and other unstructured data. They bring to the surface previously hard-to-see patterns in any kind of content stream – social media, customer comments, health care service ratings, and much more. OpenText Analytics solutions analyze these patterns and bring them to life in attractive, easy-to-understand, interactive visualizations. And if some unforeseen event ends up generating millions of unexpected clicks, Tweets, or comments that you need to sift through quickly, iHub offers the power and reliability to handle billions of data points on the fly.

Hello, Surprise Visitors!

Speaking of unforeseen events: some of the Election Tracker traffic was due to mistaken identity. On Election Day, so many people were searching online for sites with live tracking of state-by-state election results that electiontracker.us became one of the top results on Google that day. At peak demand, the site was getting nearly 8,000 hits an hour, more than 100 times the usual traffic.

Senior Director of Technical Marketing Mark Gamble, an Election Tracker evangelist, was the site administrator that day. “On November 8 at around 6 a.m., I was about to get on a flight when I started getting e-mail alerts from our cloud platform provider that the Election Tracker infrastructure was getting hammered from all those Google searches. I’d resolve that alert, and another one would pop up.

“We had it running at just two nodes of our four-node cluster, to keep day-to-day operating costs down. Our technical team said, ‘Let’s spin up the other two nodes.’ That worked while I was changing planes in Detroit. But when I got off, my phone lit up again: demand was still climbing. It was just unprecedented traffic.

“So we had our cloud provider double the number of cores, or CPUs, that run on each node. And that kept up with demand. The site took a bit longer to load, but it never once crashed. That’s the advantage of running in the cloud – you can turn up the volume on the fly.

“Of course, the flexibility of our iHub-based platform is unique. All the cloud resources in the world won’t help you if you can’t quickly and efficiently take advantage of them.”

Easy Visualizing

Demand on the site was heightened by the Election Tracker’s live, interactive interface. That’s intentional, because OpenText Analytics solutions encourage users to take a self-service approach to exploring their data. “It’s not just a series of static pages,” explains Clement Wong, Director of Analytics On-Demand Operations. “The infographics are live and change as the viewer adjusts the parameters. With each page hit, a visitor was asking for an average of seven visualizations. That means the interface is constantly issuing additional calls back and forth to the database and the analytic engine. iHub has the robustness to support that.” (In fact, at peak demand the Tracker was creating more than 15 new visualizations every second.)

“Some of the reporters who wrote about Election Tracker told us how much they enjoyed being able to go in and do comparisons on their own,” Gamble says. “For example, look at how much coverage each candidate got over the past 90 days, compared to the last 7 days, then filter for only non-U.S. news sources, or drill down to specific topics like healthcare or foreign policy. That way they didn’t have to look at static figures and then contact us to interpret for them; the application gave them the autonomy to draw their own conclusions.”

Great Fit for Embedding

“The self-service aspect is one reason that iHub and other OpenText Analytics solutions are a great fit for embedding into other web sites – use cases such as bank statements or utility usage,” Gamble adds. “First of all, an effective embedded analytic application has to be highly interactive and informative, so people want to use it – not just look at ready-made pages, but feel comfortable exploring on their own. Embedded analytics also requires seamless integration with the underlying data sources, so the visuals are integral and indistinguishable from the rest of the site, and it needs high scalability to keep up with growing usage.”

What’s Next?

The iHub/InfoFusion integration underlying the Election Tracker is already being used in other proofs-of-concept. One is helping consumer goods manufacturers analyze customers’ social media streams for their sentiments about the product and their needs or concerns. “If you think of Election Tracker as the Voice of the Media, the logical next step is Voice of the Customer,” Gamble says. The Election Tracker is headlining the OpenText Innovation Tour, which just wrapped up in Asia and resumes in spring 2017.
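The tone analysis mentioned throughout this story – tagging coverage as positive, negative, or neutral – can be illustrated with a deliberately tiny lexicon-based sketch. This is not the Election Tracker’s actual algorithm, and the word lists are invented for illustration; real sentiment pipelines use far richer models:

```python
# Toy lexicon-based sentiment tagging, in the spirit of classifying news
# snippets as positive, negative, or neutral. (Word lists are illustrative;
# the real Election Tracker pipeline is far more sophisticated.)

POSITIVE = {"win", "strong", "praised", "surge"}
NEGATIVE = {"scandal", "attack", "weak", "losing"}

def tone(text):
    """Classify a snippet by counting lexicon hits: >0 positive, <0 negative."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

headlines = [
    "Candidate praised for strong debate win",
    "New scandal hits campaign",
    "Candidates visit Ohio on Tuesday",
]
print([tone(h) for h in headlines])
# ['positive', 'negative', 'neutral']
```

Scale the same idea to hundreds of outlets and refresh it continuously, and you get the kind of topic-and-tone dashboard the Tracker rendered – the hard parts are the language models, the aggregation, and serving thousands of visualizations an hour, which is where the iHub platform came in.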


Telco Accessibility 101: What’s Now Covered by U.S. Legislation


In a word, everything. Name a telecommunications product or service and chances are it has a legal requirement to comply with federal accessibility laws. Let’s see…

- Mobile connectivity services for smartphones, tablets, and computers? Check.
- Smartphones, tablets, and computers? Check.
- Internet services (e.g., cable, satellite)? Check.
- Television services (e.g., cable, satellite, broadcast)? Check.
- Televisions, radios, DVD/Blu-ray players, DVRs, and on-demand video devices? Check.
- Email, texting, and other text-based communication? Check.
- VoIP communications and online video conferencing? Check.
- Fixed-line phone services? Check.
- Fixed-line telephones, modems, answering machines, and fax machines? Check.
- Two tin cans attached by a string? Check.

All of these products and services are covered by U.S. accessibility legislation (except the cans and string).

What laws are we talking about here? Mainly Section 255 of the Telecommunications Act of 1996, for products and services that existed before 1996, and the Twenty-First Century Communications and Video Accessibility Act (CVAA) of 2010, which picked up where Section 255 left off, defining accessibility regulations for broadband-enabled advanced communications services.

Web accessibility legislation, while not telco-specific, is also relevant. The Americans with Disabilities Act (ADA) doesn’t explicitly define commercial websites as “places of public accommodation” (because the ADA predates the Internet), but the courts have increasingly interpreted the law this way. Therefore, as “places of public accommodation,” company websites – and all associated content – must be accessible to people with disabilities. For more insight on this, try searching on “Netflix ADA Title III” or reading this article. (By the way, a web-focused update of the ADA is in the offing.)

Last but not least, we come to Section 508 of the Rehabilitation Act, which spells out accessibility guidelines for businesses wanting to sell electronic and information technology (EIT) to the federal government. If your company doesn’t do that, then Section 508 doesn’t apply to you.

What this means for businesses

Not unreasonably, telecommunications companies must ensure that their products and services both comply with accessibility regulations and are usable by people with disabilities. This usability requirement means that telecom service providers must offer contracts, bills, and customer support communications in accessible formats. For product manufacturers, usability means providing customers with a full range of relevant learning resources in accessible formats: installation guides, user manuals, and product support communications.

To comply with the legislation, telecommunications companies must find and implement cost-effective technology solutions that will allow them to deliver accessible customer-facing content. Organizations that fail to meet federal accessibility standards could leave themselves open to consumer complaints, lawsuits, and, possibly, stiff FCC fines.

Meeting the document challenge with accessible PDF

Telecommunications companies looking for ways to comply with federal regulations should consider a solution that can transform their existing document output of contracts, bills, manuals, and customer support communications into accessible PDF format.

Why PDF? PDF is already the de facto electronic document standard for high-volume customer communications such as service contracts and monthly bills, because it’s portable and provides an unchanging snapshot, a necessity for any kind of recordkeeping. But what about HTML? While HTML is ideal for delivering dynamic web and mobile content such as on-demand, customizable summaries of customer account data, it doesn’t produce discrete, time-locked documents. Nor does HTML support archiving or portability, meaning HTML files are not “official” documents that can be stored and distributed as fixed entities.

Document content is low-hanging fruit

Document inaccessibility is not a problem that organizations need to live with, because it can be solved immediately – and economically – with OpenText’s Automated Output Accessibility Solution, the only enterprise PDF accessibility solution on the market for high-volume, template-driven documents. This unique software solution enables telecommunications companies to quickly transform service contracts, monthly bills, product guides, and other electronic documents into WCAG 2.0 Level AA-compliant accessible PDFs. Whatever the data source, our performance numbers are measured in milliseconds, so customers will receive their content right when they ask for it.

OpenText has successfully deployed this solution at government agencies as well as large commercial organizations, giving it the experience and expertise required to deliver accessible documents within a short time frame, with minimal disruption of day-to-day business. Fast, reliable, compliant, and affordable, our automated solution can help you serve customers and meet your compliance obligations. Learn more about the OpenText™ Automated Output Accessibility solution.

Read More

Power Up Your iHub Projects with Free Interactive Viewer Extensions

Interactive Viewer

We’ve updated and are republishing a series of helpful tips for getting the most out of the Interactive Viewer tool for OpenText™ iHub, with free extensions created by our software engineers. These extensions boost the appearance and functionality of Interactive Viewer, the go-to product for putting personalized iHub content in the hands of all users. (If you don’t already have iHub installed, click here for a free trial.)

Below are links to the full series of six blog posts. If you have any suggestions for further extensions or other resources, please let us know through the comments below.

1. Extend Interactive Viewer with Row Highlighting: A simple jQuery script for highlighting the row the user’s pointer is on.
2. Extend Interactive Viewer with a Pop-Up Dialog Box: Add a fully configurable pop-up dialog box to display details that don’t need to appear in every row.
3. Extend iHub Reports and Dashboards with Font Symbols: Dress up your iHub reports with nifty symbols.
4. Extend iHub Dashboards with Disqus Discussion Boards: Encourage conversation the easy way, by embedding a discussion board in the popular Disqus format.
5. Extend iHub Interactive Viewer with Fast Filters: Make column-based filtering easy by using JSAPI to build a Fast Filter, a selectable drop-down menu of distinct values that appears in the header of a column.
6. Extend Interactive Viewer with Table-Wide Search: Filter across multiple columns in an iHub table by creating a table-wide search box.

Read More

Turn Your Big Data into Big Value – Attend our Workshops

big data workshop

The ever-growing digitization of business processes and the rise of Big Data mean workplaces are drowning in information. Data analytics and reporting can help you find useful patterns and trends, or predict performance. The problem is that many analytics platforms require expert help in sorting through billions of lines of data. Even full-fledged data scientists spend 50 to 80 percent of their time preparing data, not getting insights from it. Moreover, there’s a built-in time lag if you need to ask your in-house data scientist to run the numbers when you, a line manager or subject expert, need answers right away. That’s why 95% of organizations want end users to be able to manage and prepare their own data, according to respected market researcher Howard Dresner.

Luckily, there’s help. The OpenText™ Analytics Suite combines powerful, richly featured data analysis with self-service convenience. You can upload your own data by the terabyte, then access, blend, and explore it quickly, without coding. The analysis results can then be shared and socialized via the highly scalable, embedded BI and data visualization platform, which lets you design, deploy, and manage interactive reports and dashboards that can be embedded into any app or device. These reports can address a wide range of business questions, from “What are my customers’ spending habits using the credit card from XYZ Bank?” to “Which customers are most likely to respond to our new offer?”

Sure, you may be thinking, this sounds great, but I want to see the solution in action before I decide. Here’s your chance. On Tuesday, Sept. 13, in San Diego, OpenText is offering a free, hands-on, full-day workshop on the OpenText Analytics Suite. This six-hour session (including breakfast and lunch) provides a guided tour of the various modules within the suite and shows you how to build dynamic, interactive, visually compelling information applications.
By the end of the workshop, you’ll understand how to:

- Build interactive, visually rich applications from the ground up
- Create data visualizations and reports using multiple data sources
- Embed these visualizations and reports seamlessly into existing apps
- Target profitable customers and markets using predictive analytics

Our visionary speakers offer a day that’s not only informative but entertaining and engaging. Check here for the full schedule of workshops and dates available.

Read More

Unstructured Data Analytics: Replacing ‘I Think’ With ‘We Know’

Anyone who reads our blogs is no doubt familiar with structured data—data that is neatly settled in a database. Row and column headers tell it where to go, the structure opens it to queries, and graphic interfaces make it easy to visualize. You’ve seen the resulting tables of numbers and/or words everywhere from business to government and scientific research. The problem is all the unstructured data, which some research firms estimate could make up between 40 and 80 percent of all data. This includes emails, voicemails, written documents, PowerPoint presentations, social media feeds, surveys, legal depositions, web pages, video, medical imaging, and other types of content.

Unstructured Data, Tell Me Something

Unstructured data doesn’t display its underlying patterns easily. Until recently, the only way to get a sense of a big stack of reports or open-ended survey responses was to read through them and hope your intuition picked up on common themes; you couldn’t simply query them. But over the past few years, advances in analytics and content management software have given us more power to interrogate unstructured content. Now OpenText is bringing together powerful processing capacities from across its product lines to create a solution for unstructured data analytics that can give organizations a level of insight into their operations that they might not have imagined before.

Replacing Intuition with Analytics

The OpenText solution for unstructured data analytics has potential uses in nearly every department or industry. Wherever people are looking intuitively for patterns and trends in unstructured content, our solution can dramatically speed up and scale out their reach. It can help replace “I feel like we’re seeing a pattern here…” with “The analytics tell us customers love new feature A but they’re finding new feature B really confusing; they wonder why we don’t offer potential feature C.” Feel more confident in your judgment when the analytics back you up.
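To make the intuition-versus-analytics contrast concrete, here is a toy word-list sentiment tally in Python. This is purely an illustrative sketch: the word lists, scoring rule, and sample comments are invented for the example, and real sentiment engines (including OpenText’s) use far more sophisticated natural language processing.

```python
# Toy sentiment scorer: counts positive vs. negative words per comment.
# Illustrative only -- the word lists and scoring rule are invented for
# this example, not a real NLP pipeline.

POSITIVE = {"love", "great", "easy", "helpful"}
NEGATIVE = {"confusing", "broken", "slow", "frustrating"}

def sentiment(comment: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral/mixed)."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

feedback = [
    "Customers love new feature A",
    "Feature B is really confusing",
    "Why don't you offer feature C?",
]
print([sentiment(c) for c in feedback])  # [1, -1, 0]
```

Even this crude tally turns “I feel like we’re seeing a pattern” into a countable signal; production systems extend the idea with entity extraction, negation handling, and trained models.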
The Technology Under the Hood

This solution draws on OpenText’s deep experience in natural language processing and data visualization. It’s scalable to handle terabytes of data and millions of users and devices. Open APIs, including JavaScript API (JSAPI) and REST, promote smooth integration with enterprise applications. And it offers built-in integration with other OpenText solutions for content management, e-discovery, visualization, archiving, and more.

Here’s how it works:

1. OpenText accesses and harvests data from any unstructured source, including written documents, spreadsheets, social media, email, PDFs, RSS feeds, CRM applications, and blogs.
2. OpenText InfoFusion retrieves and processes raw data; extracts people, places, and topics; and then determines the overall sentiment.
3. Visual summaries of the processed information are designed, developed, and deployed on OpenText Information Hub (iHub).
4. Visuals are seamlessly embedded into the app using iHub’s JavaScript API.
5. Users enjoy interactive analytic visualizations that allow them to reveal interesting facts and gain unique insights from the unstructured data sources.

Below are two common use cases we see for the OpenText solution for unstructured data analytics, but more come up every day, from retail and manufacturing to government and non-profits. If you think of further ways to use it, let us know in the comments below.

Use Case 1: On-Demand Web Chat

A bank we know told us recently how its customer service team over the past year or two had been making significantly more use of text-based customer support tools—in particular, pop-up web chat. This meant the customer service managers were now collecting significantly more “free text” on a wide range of customer support issues, including new product inquiries, complaints, and requests for assistance. Reading through millions of lines of text was proving highly time-consuming, but ignoring them was not an option.
The bank’s customer service team understood that having the ability to analyze this data would help them spot and understand trends (say, interest in mortgage refinancing) or frequent issues (such as display problems with a mobile interface). Identifying gaps in offerings, common problems, or complaints regarding particular products could help them improve their overall customer experience and stay competitive.

Use Case 2: Analysis of Complaints Data

Another source of unstructured data is the notes customer service reps take while on the phone with customers. Many CRM systems let users type in open-ended comments as an addition to the radio buttons, checklists, and other data-structuring features for recording complaints, but they don’t offer built-in functionality to analyze this free-form text. A number of banking representatives told us they considered this a major gap in their current analytics capabilities.

Typically, a bank’s CRM system will offer a “pick list” of already-identified problems or topics that customer service reps can choose from, but such lists don’t always provide the level of insight a company needs about what’s making its customers unhappy. Much of the detail is captured in unstructured free-text fields that the banks have no easy way to analyze. If they could quickly identify recurring themes, the banks felt they could be more proactive about addressing problems.

Moreover, the banks wanted to analyze the overall emotional tone, or sentiment, of these customer case records and other free-form content sources, such as social media streams. Stand-alone tools for sentiment analysis do exist, but they are generally quite limited in scope or difficult to customize.
They wanted a tool that would easily integrate with their existing CRM system and combine its sentiment analysis with other, internally focused analytics and reporting functions—for example, to track changing consumer sentiment over time against sales or customer-service call volume.

A Huge, Beautiful Use Case: Election Tracker ‘16

These are just two of the many use cases for the OpenText solution for unstructured data analytics; we’ll discuss more in future blog posts. You may already be familiar with the first application powered by the solution: the Election Tracker for the 2016 presidential race. The tracker, along with the interesting insights it sifts from thousands of articles about the campaign, has been winning headlines of its own. Expect to hear more about the Election Tracker ’16 as the campaign continues. Meanwhile, if you have ideas on other ways to use our Unstructured Data Analytics solution in your organization, leave them in the comments section.

Read More

Wind and Weather – Data Driven Digest

It’s the beginning of March, traditionally a month of unsettled early-spring weather that can seesaw back and forth between snow and near-tropical warmth, with fog, rain, and windstorms along the way. Suitably for the time of year, the data visualizations we’re sharing with you this week focus on wind and weather. Enjoy!

You Don’t Need a Weatherman…

Everyone’s familiar with the satellite imagery on the weather segment of your nightly TV news. It’s soothing to watch the wind flows cycle and clouds form and dissipate. Now an app called Windyty lets you navigate real-time and predictive views of the weather yourself, controlling the area, altitude, and variables such as temperature, air pressure, humidity, clouds, or precipitation. The effect is downright hypnotic, as well as educational – for example, you can see how much faster the winds blow at higher altitudes or watch fronts pick up moisture over oceans and lakes, then dump it as they hit mountains.

Windyty’s creator, Czech programmer Ivo Pavlik, is an avid powder skier, pilot, and kite surfer who wanted a better idea of whether the wind would be right on days he planned to pursue his passions. He leveraged the open-source Project Earth global visualization created by Cameron Beccario (which in turn draws weather data from NOAA, the National Weather Service, and other agencies, and geographic data from the free, open-source Natural Earth mapping initiative). It’s an elegant example of a visualization that focuses on the criteria users want as they query a very large data source. Earth’s weather patterns are so large, they require supercomputers to store and process. Pavlik notes that his goal is to keep Windyty a lightweight, fast-loading app that adds new features only gradually, rather than loading it down with too many options.

…To Know Which Way the Wind Blows

Another wind visualization, Project Ukko, is a good example of how to display many different variables without overwhelming viewers.
Named after the ancient Finnish god of thunder, weather, and the harvest, Project Ukko models and predicts seasonal wind flows around the world. It’s a project of Euporias, a European Union effort to create more economically productive weather prediction tools, and is intended to fill a gap between short-term weather forecasts and the long-term climate outlook.

Ukko’s purpose is to show where the wind blows most strongly and reliably at different times of the year. That way, wind energy companies can site their wind farms and make investments more confidently. The overall goal is to make wind energy a more practical and cost-effective part of a country’s energy generation mix, reducing dependence on polluting fossil fuels and improving its climate change resilience, according to Ukko’s website.

The project’s designer, German data visualization expert Moritz Stefaner, faced the challenge of displaying projections of the wind’s speed, direction, and variability, overlaid with the locations and sizes of wind farms around the world (to see if they’re sited in the best wind-harvesting areas). In addition, he needed to communicate how confident those predictions were for a given area. As Stefaner explains in an admirably detailed behind-the-scenes tour, he ended up using line elements whose thickness shows the predicted wind speed and whose brightness shows prediction accuracy, measured against decades of historical records. The difference between current and predicted speed is shown through line tilt and color. Note that the lines don’t show the actual direction the winds are heading, unlike the flows in Windyty. The combined brightness, color, and size draw the eye to the areas of greatest change. At any point, you can drill down to the actual weather observations for that location and the predictions generated by Euporias’ models.
For those of us who aren’t climate scientists or wind farm owners, the take-away from Project Ukko is how Stefaner and his team went through a series of design prototypes and data interrogations as they transformed abstract data into an informative and aesthetically pleasing visualization.

Innovation Tour 2016

Meanwhile, we’re offering some impressive data visualization and analysis capacities in the next release of our software, OpenText Suite 16 and Cloud 16, coming this spring. If you’re interested in hearing about OpenText’s ability to visualize data and enable the digital world, and you’ll be in Europe this month, we invite you to look into our Innovation Tour, in Munich, Paris, and London this week and Eindhoven in April. You can:

- Hear from Mark J. Barrenechea, OpenText CEO and CTO, about the OpenText vision and the future of information management
- Hear from additional OpenText executives on our products, services, and customer success stories
- Experience the newest OpenText releases with the experts behind them–including how OpenText Suite 16 and Cloud 16 help organizations take advantage of digital disruption to create a better way to work in the digital world
- Participate in solution-specific breakouts and demonstrations that speak directly to your needs
- Learn actionable, real-world strategies and best practices employed by OpenText customers to transform their organizations
- Connect, network, and build your brand with public and private industry leaders

For more information on the Innovation Tour or to sign up, click here.

Recent Data Driven Digests:
February 29: Red Carpet Edition – http://blogs.opentext.com/red-carpet-edition-data-driven-digest/
February 15: Love is All Around – http://blogs.opentext.com/love-around-data-driven-digest/
February 10: Visualizing Unstructured Content – http://blogs.opentext.com/visualizing-unstructured-analysis-data-driven-digest/

Read More

Red Carpet Edition—Data Driven Digest

Film wheel and clapper

The 88th Academy Awards will be given out Sunday, Feb. 28. There’s no lack of sites to analyze the Oscar-nominated movies and predict winners. For our part, we’re focusing on the best and most thought-provoking visualizations of the Oscars and film in general. As you prepare for the red carpet to roll out, searchlights to shine in the skies, and celebrities to pose for the camera, check out these original visualizations. Enjoy!

Big Movies, Big Hits

Data scientist Seth Kadish of Portland, Ore., trained his graphing powers on the biggest hits of the Oscars – the 85 movies (so far) that were nominated for 10 or more awards. He presented his findings in a handsome variation on the bubble chart, plotting numbers of nominations against Oscars won, and how many films fall into each category. (Spoiler alert: However many awards you’re nominated for, you can generally expect to win about half.)

As you can see from the chart, “Titanic” is unchallenged as the biggest Academy Award winner to date, with 14 nominations and 11 Oscars won. You can also see that “The Lord of the Rings: Return of the King” had the largest sweep in Oscars history, winning in all 11 of the categories in which it was nominated. “Ben-Hur” and “West Side Story” had nearly as high a win rate, 11 out of 12 awards and 10 out of 11, respectively. On the downside, “True Grit,” “American Hustle,” and “Gangs of New York” were the biggest losers – all of them got 10 nominations but didn’t win anything.

Visualizing Indie Film ROI

Seed & Spark, a platform for crowdfunding independent films, teamed up with the information design agency Accurat to create a series of gorgeous 3-D visualizations in the article “Selling at Sundance,” which looked at the return on investment 40 recent indie movies saw at the box office.
(The movies in question, pitched from 2011 to 2013, included “Austenland,” “Robot and Frank,” and “The Spectacular Now.”) The correlations themselves are thought-provoking – especially when you realize how few movies sell for more than they cost to make. But even more valuable, in our opinion, are the behind-the-scenes explanations the Accurat team supplied on Behance of how they built these visualizations – “(giving) a shape to otherwise boring numbers.”

The Accurat designers (Giorgia Lupi, Simone Quadri, Gabriele Rossi, and Michele Graffieti) wanted to display the correlation between three values: production budget, sale price, and box office gross. After some experimentation, they decided to represent each movie as a cone-shaped, solid stack of circles, with shading graded from production budget at the base to sale price at the top; the stack’s height represents the box office take. They dressed up their chart with sprinklings of other interesting data, such as the length, setting (historical, modern-day, or sci-fi/fantasy), and number of awards each movie won. This demonstrated that awards didn’t do much to drive box office receipts; even winning an Oscar doesn’t guarantee a profit, Accurat notes.

Because the “elevator pitch” – describing the movie’s concept in just a few words, e.g. “It’s like ‘Casablanca’ in a dystopian Martian colony” – is so important, they also created a tag cloud of the 25 most common keywords used on IMDB.com to describe the movies they analyzed. The visualization was published in hard copy in the pilot issue of Bright Ideas Magazine, which was launched at the 2014 Sundance Film Festival.

Movie Color Spectrums

One of our favorite Oscars is Production Design. It honors the amazing work of creating rich, immersive environments that help carry you away to a hobbit-hole, Regency ballroom, 1950s department store, or post-apocalyptic wasteland. And color palettes are a key part of the creative effect.
Dillon Baker, an undergraduate design student at the University of Washington, has come up with an innovative way to see all the colors of a movie. He created a Java-based program that analyzes each frame of a movie for its average color, then compresses that color into a single vertical line. These lines are compiled into a timeline that shows the entire work’s range of colors. The effect is mesmerizing. Displayed as a spectrum, the color keys pop out at you – vivid reds, blues, and oranges for “Aladdin,” greenish ’70s earth tones for “Moonrise Kingdom,” and Art Deco shades of pink and brown for “The Grand Budapest Hotel.” You can also see scene and tone changes – for example, below you see the dark, earthy hues for Anna and Kristoff’s journey through the wilderness in “Frozen,” contrasted with Elsa’s icy pastels.

Baker, who is a year away from his bachelor’s degree, is still coming up with possible applications for his color visualization technology. (Agricultural field surveying? Peak wildflower prediction? Fashion trend tracking?)

Meanwhile, another designer is using a combination of automated color analysis tools and her own aesthetics to extract whole color palettes from a single movie or TV still. Graphic designer Roxy Radulescu comes up with swatches of light, medium, dark, and overall palettes, focusing on a different work each week in her blog Movies in Color. In an interview, she talks about how color reveals mood, character, and historical era, and guides the viewer’s eye. Which is not far from the principles of good information design!

Recent Data Driven Digests:
February 15: Love is All Around – http://blogs.opentext.com/love-around-data-driven-digest/
February 10: Visualizing Unstructured Content – http://blogs.opentext.com/visualizing-unstructured-analysis-data-driven-digest/
January 29: Are You Ready for Some Football? – http://blogs.opentext.com/ready-football-data-driven-digest/
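The core of Baker’s technique, averaging each frame’s pixels down to a single color, is simple enough to sketch. The frames below are hypothetical lists of (R, G, B) pixel tuples invented for illustration; Baker’s real program was written in Java and sampled actual video frames.

```python
# Sketch of per-frame average color, the heart of the color-timeline idea.
# Frames here are invented lists of (R, G, B) pixels, not real video data.

def average_color(frame):
    """Mean R, G, B over all pixels in one frame (integer division)."""
    n = len(frame)
    r = sum(p[0] for p in frame) // n
    g = sum(p[1] for p in frame) // n
    b = sum(p[2] for p in frame) // n
    return (r, g, b)

# Two tiny 2-pixel "frames": one reddish, one bluish.
frames = [
    [(200, 40, 40), (180, 60, 20)],
    [(20, 40, 200), (40, 20, 180)],
]
timeline = [average_color(f) for f in frames]
print(timeline)  # [(190, 50, 30), (30, 30, 190)]
```

Rendering each averaged color as a one-pixel-wide vertical stripe, frame after frame, yields exactly the kind of spectrum timeline described above.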

Read More

Love is All Around – Data Driven Digest

The joy of Valentine’s Day has put romance in the air. Even though love is notoriously hard to quantify and chart, we’ve seen some intriguing visualizations related to that mysterious feeling. If you have a significant other, draw him or her near, put on some nice music, and enjoy these links.

Got a Thing for You

We’ve talked before about the Internet of Things and the “quantified self” movement made possible by ever smaller, cheaper, and more reliable sensors. One young engineer, Anthony O’Malley, took that a step further by tricking his girlfriend into wearing a heart rate monitor while he proposed to her. The story, as he told it on Reddit, is that he and his girlfriend were hiking in Brazil and he suggested that they compare their heart rates on steep routes. As shown in the graph he made later, a brisk hike on a warm, steamy day explains his girlfriend’s relatively high baseline pulse, around 100 beats per minute (bpm), while he sat her down and read her a poem about their romantic history. What kicked it into overdrive was when he got down on one knee and proposed; her pulse spiked at about 145 bpm—then leveled off a little to the 125-135 bpm range as they slow-danced by a waterfall. Finally, once the music ended, the happy couple caught their breath and the heart rate of the now bride-to-be returned to normal.

What makes this chart great is the careful documentation. Pulse is displayed not just at 5-second intervals but as a moving average over 30 seconds (smoothing out some of the randomness), against the mean heart rate of 107 bpm. O’Malley thoughtfully added explanatory labels for changes in the data, such as “She says SIM!” (yes in Portuguese) and “Song ends.” Now we’re wondering whether this will inspire similar tracker-generated reports, such as giving all the ushers in a wedding party FitBits instead of boutonnieres, or using micro-expressions to check whether you actually liked those shower gifts.
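The 30-second moving average O’Malley used to smooth his 5-second samples is a standard trick: with readings every 5 seconds, a 30-second window is just the mean of the last six values. A minimal sketch (the pulse readings below are invented; this is not O’Malley’s actual code):

```python
# 30 s moving average over 5 s samples = trailing window of 6 readings.
# The pulse values below are invented for illustration.

def moving_average(samples, window=6):
    """Trailing mean over the last `window` readings (shorter at the start)."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

pulse = [100, 102, 98, 145, 140, 130, 125]  # bpm, sampled every 5 seconds
print([round(x) for x in moving_average(pulse)])
```

The smoothed series damps single-sample jitter while still showing the proposal spike clearly, which is exactly why the chart reads so well.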
Two Households, Both Alike in Dignity

One of the most famous love stories in literature, “Romeo and Juliet,” is at heart a story of teenage lovers thwarted by their families’ rivalry. Swiss scholar and designer Martin Grandjean illuminated this aspect of the play by drawing a series of innovative network diagrams of all Shakespeare’s tragedies. Each circle represents a character—the larger, the more important—while lines connect characters who appear in the same scene together. The “network density” statistic indicates how widely distributed the interactions are; 100% means that each character shares the stage at least once with everybody else in the play.

The lowest network density (17%) belongs to Antony and Cleopatra, which features geographically far-flung groups of characters who mostly talk amongst themselves (Cleopatra’s courtiers, Antony’s friends, his ex-wives and competitors back in Rome). By contrast, Othello has the highest network density at 55%; its diagram shows a tight-knit group of colleagues, rivals, and would-be lovers on the Venetian military base at Cyprus trading gossip and threats at practically high-school levels. The diagram of Romeo and Juliet distinctly shows the separate families, Montagues and Capulets. Grandjean’s method also reveals how groups shape the drama, as he writes: “Trojans and Greeks in Troilus and Cressida, … the Volscians and the Romans in Coriolanus, or the conspirators in Julius Caesar.”

Alright, We’ve Got a Winner

Whether your Valentine’s Day turns out to be happy or disappointing, there’s surely a pop song to sum up your mood. The Grammy Awards are a showcase for the best — or at least the most popular — songs of the past year in the United States. The online lyrics library Musixmatch, based in Bologna, Italy, leveraged its terabytes of data and custom algorithms to make a prediction based on all 295 of the past Song of the Year nominees (going back to 1959).
As Musixmatch data scientist Varun Jewalikar and designer Federica Fragapane wrote, they built a predictive analytics model based on a random forest classifier, which ended up ranking all five of this year’s nominees from most to least likely to win. Before announcing the predicted winner, Fragapane and Jewalikar made a few observations:

- Song of the Year winners have been getting wordier, though not necessarily longer (most likely due to the increasing popularity of rap and hip-hop genres, where lyrics are more prominent).
- They’ve also been getting louder.
- Lyrics are twice as important as audio.

And they note that a sample set of fewer than 300 songs “is not enough data to build an accurate model and also there are many factors (social impact, popularity, etc.) which haven’t been modeled here. Thus, these predictions should be taken with a very big pinch of salt.” With that said, their prediction… was a bit off, but still a great example of visualized data.

Recent Data Driven Digests:
February 10: Visualizing Unstructured Content
January 29: Are You Ready for Some Football?
January 19: Crowd-Sourcing the Internet of Things
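The random-forest idea behind the Musixmatch model can be sketched in miniature: train many weak decision rules on bootstrap samples of past nominees, then rank this year’s nominees by the fraction of rules voting “winner.” Everything below is invented for illustration (the two features, the training data, and the nominee names); Musixmatch’s real model used far richer lyric and audio features.

```python
# Toy "random forest" ranking, illustrating the vote-and-rank idea.
# Features (words-per-minute, loudness) and all data are invented.

import random

def train_stump(data):
    """One weak rule: threshold a random feature on a bootstrap sample."""
    feat = random.randrange(2)
    sample = [random.choice(data) for _ in data]  # bootstrap resample
    thresh = sum(x[feat] for x, _ in sample) / len(sample)
    # Vote "winner" (1) above the threshold if winners in the sample tend that way.
    above = [y for x, y in sample if x[feat] >= thresh]
    sign = 1 if above and sum(above) * 2 >= len(above) else 0
    return lambda x: sign if x[feat] >= thresh else 1 - sign

def forest_score(forest, x):
    """Fraction of trees voting 'winner' for nominee features x."""
    return sum(tree(x) for tree in forest) / len(forest)

random.seed(0)
# (words_per_minute, loudness) -> won Song of the Year? (invented history)
past = [((40, 0.3), 0), ((90, 0.7), 1), ((60, 0.5), 0), ((110, 0.8), 1)]
forest = [train_stump(past) for _ in range(101)]

nominees = {"Song A": (100, 0.75), "Song B": (45, 0.35)}
ranked = sorted(nominees, key=lambda n: forest_score(forest, nominees[n]),
                reverse=True)
print(ranked)
```

With so little data the toy forest is no more trustworthy than Musixmatch’s own “very big pinch of salt” caveat, which is rather the point: the ensemble-vote mechanics are simple, and the quality of the ranking stands or falls with the training set.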

Read More

Visualizing Unstructured Analysis — Data Driven Digest

As the 2016 Presidential campaigns finish New Hampshire and move on toward “Super Tuesday” on March 1, the candidates and talking heads are still trading accusations about media bias. Which got us thinking about text analysis and ways to visualize unstructured content. (Not that we’re bragging, but TechCrunch thinks we have an interesting way to measure the tenor of coverage on the candidates…) So this week in the Data Driven Digest, we’re serving up some ingenious visualizations of unstructured data. Enjoy!

Unstructured Data Visualization in Action

We’ve been busy with our own visualization of unstructured data — namely, all the media coverage of the 2016 Presidential race. Just in time for the first-in-the-nation Iowa caucuses, OpenText released Election Tracker ‘16, an online tool that lets you monitor, compare, and analyze news coverage of all the candidates. Drawing on OpenText Release 16 (Content Suite and Analytics Suite), Election Tracker ‘16 automatically scans and reads hundreds of major online media publications around the world. This data is analyzed daily to determine sentiment and extract additional information, such as people, places, and topics. It is then translated into visual summaries and embedded into the election app, where it can be accessed using interactive dashboards and reports.

This kind of content analysis can reveal much more than traditional polling data: holistic insights into candidates’ approaches and whether their campaign messages are attracting coverage. And although digesting the daily coverage has long been a part of any politician’s day, OpenText Release 16 can do what no human can do: read, analyze, process, and visualize a billion words a day.

Word Crunching 9 Billion Tweets

While we’re tracking language, forensic linguist Jack Grieve of Aston University, Birmingham, England has come up with an “on fleek” (perfect, on point) way to pinpoint how new slang words enter the language: Twitter.
Grieve studied a dataset of Tweets posted in 2013 and 2014 by 7 million users all over America, containing nearly 9 billion words (collected by geography professor Diansheng Guo of the University of South Carolina). After eliminating all the regular, boring words found in the dictionary (so that he’d only be seeing “new” words), Grieve sorted all the remaining words by county, filtered out the rare outliers and obvious mistakes, and looked for the terms that showed the fastest rise in popularity, week over week. These popular newcomers included “baeless” (single, a seemingly perpetual state), “famo” (family and friends), “TFW” (“that feeling when…”; for instance, TFW a much younger friend has to define the term for you. That feeling would be chagrin.), and “rekt” (short for wrecked or destroyed, not “rectitude”).

As described in the online magazine Quartz, Grieve found that some new words are popularized by social media microstars or are native to the Internet, like “faved” (to “favorite” a Tweet) or “amirite” (an intentional misspelling of “Am I right?” mocking the assumption that your audience agrees with a given point of view). Grieve’s larger points include the insights you can get from crunching Big Data (9 billion Twitter words!) and social media’s ability to capture language as it’s actually used, in real time. “If you’re talking about everyday spoken language, Twitter is going to be closer than a news interview or university lecture,” he told Quartz.

Spreading Virally

On a more serious subject, unstructured data in the form of news coverage helps track outbreaks of infectious diseases such as the Zika virus. HealthMap.org is a site (and mobile app) created by a team of medical researchers and software developers at Boston Children’s Hospital. They use “online informal sources” to track emerging diseases including flu, the dengue virus, and Zika.
Their tracker automatically pulls from a wide range of intelligence sources, including online news stories, eyewitness accounts, official reports, and expert discussions about dangerous infectious diseases, in nine languages including Chinese and Spanish. Drawing from unstructured data is what differentiates HealthMap.org from other infectious disease trackers, such as the federal Centers for Disease Control and Prevention’s weekly FluView report.

The CDC’s FluView provides an admirable range of data, broken out by patients’ age, region, flu strain, comparisons with previous flu seasons, and more. The only problem is that the CDC bases its reports on flu cases reported by hospitals and public health clinics in the U.S. This means the data is both delayed and incomplete (it doesn’t include flu victims who never saw a doctor, for instance, or cases not reported to the CDC), limiting its predictive value. By contrast, the HealthMap approach captures a much broader range of data sources. So its reports convey a fuller picture of disease outbreaks, in near-real time, giving doctors and public-health planners (or nervous travelers) better insights into how Zika is likely to spread. This kind of data visualization is just what the doctor ordered.

Recent Data Driven Digests:
January 29: Are You Ready for Some Football?
January 19: Crowd-Sourcing the Internet of Things
January 15: Location Intelligence
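Looking back at Grieve’s Twitter study, the core of his method (drop known dictionary words, then rank what remains by week-over-week growth) fits in a short sketch. The dictionary and word counts below are invented for illustration; Grieve’s actual pipeline worked on county-level counts from 9 billion words.

```python
# Sketch of the filter-then-rank idea behind Grieve's slang detection:
# discard dictionary words, rank the rest by week-over-week growth.
# The dictionary and counts below are invented for illustration.

DICTIONARY = {"the", "family", "friends", "single"}

def rising_terms(week1, week2, dictionary):
    """Non-dictionary terms, ranked by growth from week1 to week2 counts."""
    candidates = (set(week1) | set(week2)) - dictionary
    growth = {
        t: (week2.get(t, 0) + 1) / (week1.get(t, 0) + 1)  # +1 smoothing
        for t in candidates
    }
    return sorted(candidates, key=growth.get, reverse=True)

week1 = {"the": 900, "famo": 3, "baeless": 10, "rekt": 40}
week2 = {"the": 950, "famo": 30, "baeless": 12, "rekt": 44}
print(rising_terms(week1, week2, DICTIONARY))
# ['famo', 'baeless', 'rekt'] -- "famo" grew ~8x, the others barely moved
```

The add-one smoothing keeps brand-new words (zero counts in week one) from producing a division by zero, a small but necessary detail when ranking genuinely novel terms.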


Are You Ready for Some Football? — Data Driven Digest


Here in Silicon Valley, Super Bowl 50 is not only coming up, it's downright inescapable. The OpenText offices are positioned halfway between Santa Clara, where the game will actually be played on Feb. 7, and San Francisco, site of the official festivities (44 miles north of the stadium, but who's counting?). So in honor of the Big Game, this week we're throwing on the shoulder pads and tackling some data visualizations related to American football. Enjoy!

Bringing Analytics Into Play

In the area of statistics and data visualization, football has always taken a back seat to baseball, the game beloved by generations of bow-tied intellectuals. But Big Data is changing the practice of everything from medicine to merchandising, so it's no wonder that better analysis of the numbers is changing the play and appreciation of football. Exhibit A: Kevin Kelley, head football coach at Pulaski Academy in Little Rock, Ark., has a highly unusual style of play: no punts. He's seen the studies from academics such as UC Berkeley professor David Romer, concluding that teams shouldn't punt on fourth down with fewer than 4 yards to go, and he came to the conclusion that "field position didn't matter nearly as much as everyone thought it did." As Kelley explains in an ESPN short film on the FiveThirtyEight.com hub, if you punt when the ball is on your 5-yard line or less, the other team scores 92% of the time. Even 40 yards from your goal line, the other team still scores 77% of the time. "Numbers have shown that what we're doing is correct," he says in the film. "There's no question in my mind, or my coaches' minds, that we wouldn't have had the success we've had without bringing analytics into (play)." The coach's data-driven approach has paid off, giving Pulaski multiple winning seasons over the past 12 years, including a 14-0 record in 2015.
The highlight of their latest season: beating Texas football powerhouse Highland Park 40-13 and snapping its 84-game home winning streak, which went back to 1999.

Bigger, Faster, Stronger

No doubt most of Coach Kelley's players dream of turning pro. But they'll need to bulk up if they want to compete, especially as defensive linemen. Two data scientists offer vivid demonstrations of how much bigger NFL players have gotten over the past few generations. Software engineer and former astrophysicist Craig M. Booth crunched the data from 2013 NFL rosters to create charts of players' heights and weights. His chart makes it easy to see how various positions sort neatly into clusters: light, nimble wide receivers and cornerbacks; tall defensive and tight ends; refrigerator-sized tackles and guards. The way Booth mapped the height/weight correlation, with different colors and shapes indicating the various positions, isn't rocket science. It is, however, a great example of how automation is making data visualization an everyday tool. As he explains on his blog, he didn't have to manually plot the data points for all 1,700-odd players in the NFL; he downloaded a database of the player measurements from the NFL's Web site, then used an IPython script to display it. For a historical perspective on how players have gotten bigger since 1950, Booth created a series of line charts showing how players' weights have skyrocketed relative to their heights.

Backfield in Motion

Meanwhile, Noah Veltman, a member of the data-driven journalism team at New York City's public radio station WNYC, has made the bulking-up trend even more vivid by adding a third dimension, time, to his visualization. His animation draws on NFL player measurements going all the way back to 1920.
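The kind of position-colored height/weight scatter Booth describes takes only a few lines once the roster data is in hand. This is a minimal matplotlib sketch, not Booth's actual script; the tiny `roster` sample here is made up for illustration, where a real run would load all 1,700-odd player records.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical sample rows; a real script would load the full roster
# database (position, height in inches, weight in pounds).
roster = [
    {"position": "WR", "height": 72, "weight": 195},
    {"position": "CB", "height": 71, "weight": 190},
    {"position": "TE", "height": 77, "weight": 255},
    {"position": "OT", "height": 78, "weight": 320},
    {"position": "OG", "height": 76, "weight": 315},
]

fig, ax = plt.subplots()
for pos in sorted({row["position"] for row in roster}):
    xs = [r["height"] for r in roster if r["position"] == pos]
    ys = [r["weight"] for r in roster if r["position"] == pos]
    ax.scatter(xs, ys, label=pos)  # one color per position

ax.set_xlabel("Height (inches)")
ax.set_ylabel("Weight (pounds)")
ax.legend(title="Position")
fig.savefig("nfl_heights_weights.png")
```

With the full dataset, the clusters Booth describes – light receivers and cornerbacks at one end, refrigerator-sized linemen at the other – fall out of the plot on their own.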
He observes that football players' increasing size is partly due to the fact that Americans in general have gotten taller and heavier over time, though partly also due to increasing specialization of body type by position. You can see a wider range of height-and-weight combinations as the years go by. And from the 1990s on, they begin falling into clusters. (You could also factor in more weight training, rougher styles of play, and other trends, but we'll leave that discussion to the true football geeks.)

Bars, Lines, and Bubbles

Now, what kind of play are we seeing from these bigger, better-trained players? Craig M. Booth recently unveiled an even more interesting football-related project, an interactive visualizer of the performance of every NFL team from 2000 on. He uses the Google Charts API to display data from www.ArmchairAnalysis.com on everything from points scored by team by quarter to total passing or penalty yards. You can customize the visualizer by the teams tracked, which variables appear on the X- and Y-axes, whether they're on a linear or logarithmic scale, and whether to display the data as bubble plots, bar charts, or line graphs. It can serve up all kinds of interesting correlations. (Even though OpenText offers powerful predictive capabilities in our Big Data Analytics suite, we disavow any use of this information to predict the outcome of a certain football game on February 7…)

OpenText Named a Leader in the Internet of Things

Speaking of sharing data points, OpenText was honored recently in the area of the Internet of Things by Dresner Advisory Services, a leading analyst firm in the field of business intelligence, with its first-ever Technical Innovation Awards. You can view an infographic on Dresner's Wisdom of Crowds research.

Recent Data Driven Digests:
January 19: Crowd-Sourcing the Internet of Things
January 15: Location Intelligence
January 5: Life and Expectations


Crowd-Sourcing the Internet of Things (Data Driven Digest for January 19, 2016)

Runner with fitness tracker passes the Golden Gate Bridge

The Internet of Things is getting a lot of press these days. The potential use cases are endless, as colleague Noelia Llorente has pointed out: refrigerators that keep track of the food inside and order more milk or lettuce whenever you're running low; mirrors that can determine if you have symptoms of illness and make health recommendations for you; automated plantations, smart city lighting, autonomous cars that pick you up anywhere in the city… So in this week's Data Driven Digest, we're looking at real-world instances of the Internet of Things that do a good job of sharing and visualizing data. As always, we welcome your comments and suggestions for topics in the field of data visualization and analysis. Enjoy!

The Journey of a Million Miles Starts with a Single Step

Fitness tracking has long been a popular use for the Internet of Things. Your correspondent was an early adopter, having bought Nike+ running shoes, with a special pocket for a small Internet-enabled sensor, back in 2007. (Nike+ is now an app using customers' smartphones, smart watches, and so forth as trackers.) These sensors track where you go and how fast, and your runs can be uploaded and displayed on the Nike+ online forum, along with user-generated commentary: "trash talk" to motivate your running buddies, descriptions and ratings of routes, and so forth. Nike is hardly the only online run-sharing provider, but its site is popular enough to have generated years of activity patterns by millions of users worldwide. Here's one example, a heat map of workouts in the beautiful waterfront parks near San Francisco's upscale Presidio and Marina neighborhoods. (You can see which streets are most popular – and, probably, which corners have the best coffeehouses…)

The Air That I Breathe

Running makes people more aware of the quality of the air that they breathe.
HabitatMap.org, an "environmental justice" nonprofit in Brooklyn, N.Y., is trying to make people more conscious of the invisible problem of air pollution through palm-sized sensors called AirBeams. These handheld sensors can measure levels of microparticulate pollution, ozone, carbon monoxide, and nitrogen dioxide (which can be blamed for everything from asthma to heart disease and lung cancer), as well as temperature, humidity, ambient noise, and other conditions. So far, so good: buy an AirBeam for $250 and get a personal air-quality meter whose findings may surprise you. (For example, cooking on a range that doesn't have an effective air vent subjects you to more grease, soot, and other pollution than the worst smog day in Beijing.) But the Internet of Things is what really makes the device valuable. Just as with the Nike+ activity trackers, AirBeam users upload their sensor data to create collaborative maps of air quality in their neighborhoods. Here, a user demonstrates how his bicycle commute across the Manhattan Bridge subjects him to a lot of truck exhaust and other pollution – a peak of about 80 micrograms of particulate per cubic meter (µg/m³), over twice the Environmental Protection Agency's 24-hour limit of 35 µg/m³. And here's a real-time aggregation of hundreds of users' data about the air quality over Manhattan and Brooklyn. (Curiously, some of the worst air quality is over the Ozone Park neighborhood…) Clearly, the network effect applies to these and many other crowd-sourced Internet of Things applications: the more data points users are willing to share, the more valuable the overall solution becomes.

OpenText Named a Leader in the Internet of Things

Speaking of sharing data points, OpenText was honored recently in the area of the Internet of Things by Dresner Advisory Services, a leading analyst firm in the field of business intelligence, with its first-ever Technical Innovation Awards.
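Comparing a sensor log against the EPA's 24-hour fine-particulate limit, as in the bicycle-commute example above, is a one-liner once the readings are structured. This is a hedged sketch with made-up sample readings, not AirBeam's actual data format or software.

```python
EPA_PM25_24H_LIMIT = 35.0  # EPA 24-hour fine-particulate limit, µg/m³

def flag_exceedances(readings, limit=EPA_PM25_24H_LIMIT):
    """Return the (timestamp, value) pairs that exceed the limit.

    readings: iterable of (timestamp, µg/m³) pairs from a sensor log.
    """
    return [(t, v) for t, v in readings if v > limit]

# Hypothetical readings from a bridge commute, for illustration only.
commute = [("08:01", 12.0), ("08:14", 80.0), ("08:20", 41.5)]
over = flag_exceedances(commute)
peak = max(v for _, v in commute)
print(f"{len(over)} readings over the EPA limit; peak {peak} µg/m³")
```

Aggregating many users' flagged readings by location is essentially how the crowd-sourced air-quality maps described above get built.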
You can view an infographic on Dresner's Wisdom of Crowds research.

Recent Data Driven Digests:
January 15: Location Intelligence
January 5: Life and Expectations
December 22: The Passage of Time in Sun, Stone, and Stars


Data Driven Digest for January 15: Location, Location, Location

Location intelligence is a trendy term in the business world these days. Basically, it means tracking the whereabouts of things or people, often in real time, and combining that with other information to provide relevant, useful insights. At a consumer level, location intelligence can help with things like finding a coffeehouse open after 9 p.m. or figuring out whether driving via the freeway or city streets will be faster. At a business level, it can help with decisions like where to build a new store branch that doesn't cannibalize existing customers, or how to lay out the most efficient delivery truck routes. Location intelligence is particularly on our mind now because OpenText was recently honored by Dresner Advisory Services, a leading analyst firm in the field of business intelligence, with its first-ever Technical Innovation Awards. Dresner recognized our achievements in three areas: Location Intelligence, Internet of Things, and Embedded BI. You'll be hearing more about these awards later. In the meantime, we're sharing some great data visualizations based on location intelligence. As always, we welcome your comments and suggestions. Enjoy!

Take the A Train

In cities all over North America, people waiting at bus, train, or trolley stops who are looking at their smartphones aren't just killing time – they're finding out exactly when their ride is due to arrive. One of the most popular use cases for location intelligence is real-time transit updates. Scores of transit agencies, from New York and Toronto to Honolulu, have begun tracking the whereabouts of the vehicles in their fleets and sharing that information in live maps. One of the latest additions is the St. Charles Streetcar line of the New Orleans Regional Transit Authority (NORTA) – actually the oldest continuously operating street railway in the world! (It was created in 1835 as a passenger railway between downtown New Orleans and the Carrollton neighborhood, according to the NORTA Web site.)
This is not only a boon to passengers; the location data can also help transit planners figure out where buses are bunching up or falling behind, and adjust schedules accordingly.

On the Street Where You Live

Crowdsourcing is a popular way to enhance location intelligence. The New York Times offers a great example with this interactive feature describing writers' and artists' favorite walks around New York City. You can not only explore the map and associated stories, you can also add your own – like this account of a proposal on the Manhattan Bridge.

Shelter from the Storm

The City of Los Angeles is using location intelligence in a particularly timely way: an interactive map of resources to help residents cope with winter rainstorms (which are expected to be especially bad this year, due to the El Niño weather phenomenon). The city has created a Google Map, embedded in the www.elninola.com site, that shows rainfall severity and any related power outages or flooded streets, along with where residents can find sandbags, hardware stores, or shelter from severe weather, among other things. It's accessible via both desktop and smartphones, so users can get directions while they're driving. (Speaking of directions while driving in the rain, L.A. musician and artist Brad Walsh captured some brilliant footage of an apparently self-driving wheeled trashcan in the Mt. Washington neighborhood. We're sure it'll get its own Twitter account any day now.)

We share our favorite data-driven observations and visualizations every week here. What topics would you like to read about? Please leave suggestions and questions in the comment area below.

Recent Data Driven Digests:
January 5: Life and Expectations
December 22: The Passage of Time in Sun, Stone, and Stars
December 18: The Data Awakens


Data Driven Digest for January 5: Life and Expectations

Welcome to 2016! Wrapping up 2015 and making resolutions for the year ahead is a good opportunity to consider the passage of time – and in particular, how much of it is left to each of us. We're presenting some of the best visualizations of lifespans and life expectancy. So haul that bag of empty champagne bottles and eggnog cartons to the recycling bin, pour yourself a nice glass of kale juice, and enjoy these links for the New Year.

"Like Sands Through the Hourglass…"

It's natural to wonder how many more years we'll live. In fact, it's an important calculation when planning for retirement. Figuring out how long a whole population will live is a solvable problem – in fact, statisticians have been forecasting life expectancy for nearly a century. And the news is generally good: life expectancies are going up in nearly every country around the world. But how do you figure out how many years are left to you, personally? (Short of consulting a fortune-teller, a process we don't recommend, as the conclusions are generally not data-driven.) UCLA-trained statistician Nathan Yau of the excellent blog Flowing Data came up with a visualization that looks a bit like a pachinko game. It runs multiple simulations predicting your likely age at death (based on age, gender, and Social Security Administration data) by showing little balls dropping off a slide to hit a range of potential remaining lifespans, everything from "you could die tomorrow" to "you could live to 100." As the simulations pile up, they peak at the likeliest point. One of the advantages of Yau's simulator is that it doesn't provide just one answer, the way many calculators do that ask about your age, gender, race, health habits, and so forth. Instead, it uses the "Monte Carlo" method of multiple randomized trials to get an aggregated answer. Plus, the little rolling, bouncing balls are visually compelling. (That's academic-ese for "They're fun to watch!") "Visually compelling" is the key.
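The Monte Carlo idea behind Yau's simulator can be sketched in a few lines: run many randomized "lives" forward one year at a time and aggregate the outcomes. This is not Yau's code, and the `toy_mortality` curve below is invented for illustration; a real simulator would use Social Security Administration actuarial tables by age and gender.

```python
import random

def simulate_age_at_death(current_age, annual_death_prob, rng):
    """Walk forward one year at a time until a simulated death occurs."""
    age = current_age
    while age < 120:
        if rng.random() < annual_death_prob(age):
            return age
        age += 1
    return age

def monte_carlo_lifespan(current_age, annual_death_prob,
                         trials=10_000, seed=42):
    """Average age at death over many randomized trials."""
    rng = random.Random(seed)
    outcomes = [simulate_age_at_death(current_age, annual_death_prob, rng)
                for _ in range(trials)]
    return sum(outcomes) / trials

def toy_mortality(age):
    # Invented curve: risk of dying this year grows ~9% per year of age.
    return min(1.0, 0.0005 * 1.09 ** (age - 30))

expected = monte_carlo_lifespan(current_age=35,
                                annual_death_prob=toy_mortality)
```

A single trial, like a single pachinko ball, can land anywhere from "tomorrow" to age 100-plus; only the pile-up of thousands of trials reveals where the likeliest outcome peaks.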
As flesh-and-blood creatures, we can't operate entirely in the abstract. It's one thing to be told you can expect to live X more years; seeing that information as an image somehow has more impact in terms of motivating us to action. That's why the approach taken by Wait But Why blogger Tim Urban is so striking despite being so simple. He started with the assumption that we'll each live to 90 years old – optimistic, but doable. Then he rendered that lifespan as a series of squares, one per year. What makes Urban's analysis memorable – and a bit chilling – is when he illustrates the remaining years of life as the events in that life: baseball games, trips to the beach, Chinese dumplings, days with aging parents or friends. Here, he figures that 34 of his 90 expected winter ski trips are already behind him, leaving only 56 to go. Stepping back, he comes to three conclusions:

1) Living in the same place as the people you love matters. I probably have 10X the time left with the people who live in my city as I do with the people who live somewhere else.

2) Priorities matter. Your remaining face time with any person depends largely on where that person falls on your list of life priorities. Make sure this list is set by you—not by unconscious inertia.

3) Quality time matters. If you're in your last 10% of time with someone you love, keep that fact in the front of your mind when you're with them and treat that time as what it actually is: precious.

Spending Time on Embedded Analytics

Since we're looking ahead to the New Year, on Tuesday, Jan. 12, we're hosting a webinar featuring TDWI Research Director Fern Halper, discussing Operationalizing and Embedding Analytics for Action. Halper points out that analytics need to be embedded into your systems so they can provide answers right where and when they're needed. Uses include support for logistics, asset management, customer call centers, and recommendation engines, to name just a few. Dial in – you'll learn something!
Fern Halper

We share our favorite data-driven observations and visualizations every week here. What topics would you like to read about? Please leave suggestions and questions in the comment area below.

Recent Data Driven Digests:
December 22: The Passage of Time in Sun, Stone, and Stars
December 18: The Data Awakens
December 11: Holiday Lights
