Compliance

Big Data Is Still a Game Changer, but the Game Has Changed. Here's How.

Not long ago, organizations bragged about the large volume of data in their databases. The implied message from IT leaders who boasted about their terabytes and petabytes and exabytes was that company data was like a mountain of gold ore, waiting to be refined. The more ore they had, the more gold – that is, business value – they could get out of it. But the “bigness” of Big Data isn’t the game changer anymore. The real competitive advantage from Big Data lies in two areas: how you use the data, and how you provide access to the data. The way you address both of those goals can make or break an application – and, in some cases, even make or break your entire organization. Allow me to explain why, and tell you what you can do about it – because mastering this important change is vital to enabling the digital world.

How Big Data Has Changed

Each of us – and the devices we carry, wear, drive, and use every day – generates a surge of data. This information is different from the Big Data of just a few years ago, because today’s data is both about us and created by us. Websites, phones, tablets, wearables and even cars are constantly collecting and transmitting data – our vital stats, location, shopping habits, schedules, contacts, you name it. Companies salivate over this smorgasbord of Big Data because they know that harnessing it is key to business success. They want to analyze this data to predict customer behavior and likely outcomes, which should enable them to sell better (and, of course, sell more) to us. That’s the “how you use data” part of the equation – the part that has remained pretty consistent since market research was invented more than 100 years ago, but that has improved greatly (both in speed and precision) with the advent of analytics software. Then comes the “how you provide access to data” part of the equation – the part that highlights how today’s user-generated Big Data is different. Smart, customer-obsessed businesses understand that the data relationship with their consumers is a two-way street. They know that there is tremendous value in providing individuals with direct, secure access to their own data, often through the use of embedded analytics. Put another way: the consumers created the data, and they want it back. Why else do you think financial institutions tout how easily you can check balances and complete transactions on smartphones, and healthcare companies boast about enabling you to check test results and schedule appointments online? Making your data instantly available to you – and only to you – builds trust and loyalty, and deepens the bond between businesses and consumers. And like I said earlier, doing so is vital to enabling the digital world.

The New Keys to Success

But when a business decides to enable customers to access their data online and explore it with embedded analytics, that business must give top priority to customers’ security and privacy concerns. In a blog post, “Privacy Professor” Rebecca Herold notes that data breaches, anonymization and discrimination rank among the Top 10 Big Data Analytics Privacy Problems. Her post is a must-read for organizations that plan to provide data analytics to customers. To underline Herold’s point, Bank Info Security says that personal data for more than 391.5 million people was compromised in the top six security breach incidents in 2014 – and that number does not include the Sony breach that made headlines.
Security and privacy must be a primary consideration for any organization harnessing Big Data analytics. Remember what Uncle Ben said to Peter Parker: “With great power comes great responsibility.” Meeting the privacy and security challenges of today’s user-generated Big Data requires a comprehensive approach that spans the lifecycle of customer data, from generation through distribution. If you want guidance in creating such an approach, check out the replay of a webinar I presented on June 23, Analytics in a Secure World. My colleague Katharina Streater and I discussed:

• The drivers and trends in the market
• What top businesses today do to ensure Big Data protection
• How you can secure data during content generation, access, manipulation and distribution
• Strategies for complying with data security regulations in any industry

If you watch the replay, you’ll come away with great ideas for securing data from the point of access all the way through to deployment and display of analytic results. We explained why a comprehensive approach minimizes the risk of security breaches, while simultaneously providing a personalized data experience for each individual user. We closed the program by explaining how OpenText Analytics and Reporting products have the horsepower required to handle immense volumes of data securely. We showed how the OpenText Analytics platform scales to serve millions of users, and explained why its industrial-strength security can integrate directly into any existing infrastructure. Please check out Analytics in a Secure World today.

Privacy Please image by Josh Hallett, via Flickr.

Read More

8 Top Considerations for a Successful Cloud Partnership

As your organization embarks on creating a cloud strategy, there are many things to consider. What are the top benefits you are looking to achieve by moving workloads into the cloud? What areas of your business are you willing to consider as cloud or hybrid cloud implementations? Most of all, as you move into the cloud you are faced with extending your infrastructure and IT team to include your cloud provider. What are the key things you should consider as you determine the key cloud partnerships you will embrace?

One Size Does Not Fit All

Developing a cloud strategy is an exercise in understanding your business processes, workloads, security rules, compliance requirements, and user adoption – just to name a few. This is not a simple task, and moving business to the cloud might mean a combination of deployment and costing strategies. Options for on-premises, public cloud, private cloud (both on and off site), hybrid cloud, SaaS, PaaS, and IaaS all offer organizations flexibility but can also be confusing. Which offering is ideal for which business process to generate the highest efficiency and cost benefit? There is no one-size-fits-all solution, which means IT leaders need to be prudent about cloud options and work with their vendors to ensure they get the most value for their investment.

Information Matters

As we live in the digital age, many organizations recognize the value of information and place significant priority on protecting it. Information Governance – knowing what information you have, where it is and what you need to do with it – has never been more important. When organizations look at moving information to the cloud they need to be extra vigilant to ensure that their information policies and compliance regulations are both understood and upheld by their cloud partner. The value and level of control required over information should play a part in an organization’s decision about which applications and which data will reside in the cloud, on premises or as part of a hybrid cloud implementation.

Read More

Upcoming Accessibility Deadlines for Federal Student Loan Statement Servicers

Section 508 and WCAG compliance has been an important mandate for the U.S. Federal Government, and the Department of Education is one of the government agencies actively working towards meeting these requirements for visually impaired students receiving federal loans. The Department of Education has now issued time frames and deadlines for WCAG compliance to student loan servicers that generate and distribute federal Direct Loan Program (DLP) statements, notices and communications. Accordingly, all loan statements, notices and communications, forms and websites need to be made available to borrowers in accessible read-only HTML format or a non-proprietary equivalent (e.g. accessible PDF), complying with Section 508 of the Rehabilitation Act and WCAG 2.0, within a few short months. The Federal Government has also established additional time frames for testing and verification of accessibility compliance before the actual deadline for making accessible content available to borrowers.

Loan service providers typically generate statements, notices and communications in print stream or PDF formats. Making these accessible using traditional methods is manual, laborious, time-consuming and expensive. Visually impaired students therefore typically experience a lag in receiving critical information included in their statements, notices and communications in formats they can access, due to the timelines and delays involved in generating manually tagged accessible PDFs, Braille, large print, audio or HTML formats. This is far from ideal, and visually impaired students now expect to be treated the same as anyone else with regard to the timely availability of important information. Most Federal Student Loan Statement Servicers are still struggling to find a solution that meets compliance requirements and can be made operational before the required deadlines. While building a new solution from scratch is often how IT departments approach technology challenges, given the tight timelines involved and the level of accuracy, expertise, testing capabilities and technological know-how required to build a solution, it is not an efficient way of addressing this particular requirement.

Key points for Federal Loan Service Providers to consider when implementing an automated accessibility solution:

• The solution should be easy to implement and should not require management of multiple formats
• Storage costs should not increase as a result of the solution
• The solution should be infinitely scalable and able to support the demands of generating millions of documents without performance issues

The OpenText Automated Output Accessibility solution addresses each of these requirements, and can generate accessible PDFs from any input print stream format quickly and efficiently, while keeping storage requirements in check. The solution was designed using thousands of hours of very specific experience and expertise in the system-generated document accessibility space, and our industry-leading transformation engine generates accessible output in milliseconds. In fact, the output generated by this solution has been reviewed by the National Federation of the Blind as well as the Department of Education. Using best-of-breed technology and accessibility-specific expertise is the only fool-proof way of meeting the tight time frames and deadlines defined by the Department of Education. Learn more about our solution here.

Read More

Replacing Your Legacy Archiving System is a Pain. No More!

Large organizations rely heavily on rapidly evolving technology to thrive in today’s competitive business environment. One of these vital solutions is the electronic archiving system, which is expected to maintain a comprehensive and accurate record of customer information such as statements, bills, invoices, insurance policies, scanned images and other organizational information that is essential to the survival and growth of the enterprise. It is critically important for modern organizations that these assets are retained in an efficient and intelligent manner so that they can be retrieved on demand for customer presentation, compliance, auditing, reporting, etc. Like all information technology, archive systems too need to be upgraded from time to time. Depending on the requirements of a progressive organization, this could even mean replacing the existing system with a brand new solution. The first step toward an effective solution, however, is identifying the shortcomings of the current system in the context of your evolving business needs. Here are a few tell-tale signs that your archiving system hasn’t been keeping up with your growth:

• Waning vendor support – It doesn’t receive enough attention from the vendor in terms of upgrades and support.
• Costly upgrades – It has become prohibitively expensive to boost performance or add new capabilities and features.
• New media deficit – The system falls short on receiving and serving up content to the multitude of customer channels, including web, social, mobile, tablet, text messages, email, and print.
• Social disconnect – Perhaps the most easily recognizable symptom of an outdated archive system is the inability to connect with social media such as Facebook and Twitter accounts and capture and store customer information.
• Content inaccessibility – Users complain of an inability to extract data for targeted messaging, trans-promotional marketing, analytics, and other sales and marketing functions.
• Compliance infractions – An inability to store or retrieve content could lead to investigations, fines, license revocations, or lawsuits.

If you can relate to one or more of these issues, then upgrading to a more contemporary solution may be the best way forward. One example of an archive migration we have conducted for many customers, and have extensive experience with, is the Mobius/ASG-ViewDirect® system. The challenges often highlighted for this system include some of those listed above, as well as other issues typically seen in legacy archive systems, such as the lack of a coherent product roadmap, high costs, and an outdated user experience. Customers are often certain about the need for migration but are unsure about how to move to a new archive without disrupting critical business functions. The only real roadblock to improved performance, then, is the migration itself. The process can be laborious and cumbersome, with key performance factors around the ability to perform complex document migrations on time and within budget, while maintaining access for existing applications, repurposing information locked in legacy document formats and meeting regulatory requirements. While enterprise IT departments have stringent migration requirements, modernizing your archiving system doesn’t necessarily have to be painful, and OpenText®’s ECM Migration service has a methodology in place to make sure it isn’t.
The service provides a way to efficiently migrate content out of legacy archiving systems like Mobius/ASG-ViewDirect® and others to a more contemporary solution such as OpenText’s Output Archive (formerly known as BIRT Repository). Some of the unique benefits of using OpenText’s ECM Migration Service for Mobius migrations include the ability to migrate content out of Mobius without the need to purchase expensive Mobius APIs, and the capability to read directly from the underlying file structure using the Mobius Resource Extractor, bypassing the need for Mobius to be running. Our ECM Migration methodology has been designed around best practices gleaned from many successful engagements and utilizes award-winning technologies to automate migration in a lights-out environment without disrupting day-to-day business activities. The ECM Migration team has worked with many ECM systems, including IBM® Content Management OnDemand (CMOD), IBM® FileNet® Image Services, IBM® FileNet® P8, ASG-ViewDirect®, and others, for decades, and the maturity of our solution proves it. Our technology and DETAIL™ Methodology enable us to:

• Manage all aspects of a migration
• Cut up to 6 weeks off of the initial planning
• Use standard logging on a single platform
• Provide multi-threaded support out-of-the-box
• Implement process flows, advanced logic and routers through drag-and-drop interfaces without the need for scripting
• Connect to and pool the connection with multiple databases and repositories
• Run processes concurrently, by thread or broken down by stage (i.e. Load, Extract, Convert)
• Handle massive volumes of data, documents, images and metadata

So, if you think it’s time to say goodbye to your current archiving system, know that there are experts out there who can help you define your requirements and deploy an appropriate solution that will take you where you want to go. And remember – organizations that evolve, thrive. Others perish.
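To make the staged, multi-threaded approach concrete, here is a minimal sketch of an extract-convert-load pipeline of the kind a lights-out migration runs. The LegacyReader, FormatConverter, and TargetWriter objects are hypothetical placeholders for project-specific connectors; this is an illustration of the pattern, not the DETAIL™ Methodology or an OpenText API.

```python
# Minimal sketch of a staged, multi-threaded archive migration pipeline.
# The reader, converter, and writer objects are hypothetical placeholders;
# a real migration would plug in vendor- or project-specific code here.
import logging
import queue
import threading

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(threadName)s %(message)s")
log = logging.getLogger("migration")

SENTINEL = object()  # signals the end of a stage

def extract(reader, out_q):
    """Stage 1: pull documents and metadata out of the legacy archive."""
    for doc in reader.documents():
        out_q.put(doc)
    out_q.put(SENTINEL)

def convert(in_q, out_q, converter):
    """Stage 2: repurpose content locked in legacy formats (e.g. print stream to PDF)."""
    while (doc := in_q.get()) is not SENTINEL:
        out_q.put(converter.to_target_format(doc))
    out_q.put(SENTINEL)

def load(in_q, writer):
    """Stage 3: write converted documents and metadata into the new repository."""
    count = 0
    while (doc := in_q.get()) is not SENTINEL:
        writer.store(doc)
        count += 1
    log.info("loaded %d documents", count)

def run_migration(reader, converter, writer):
    extracted, converted = queue.Queue(maxsize=100), queue.Queue(maxsize=100)
    stages = [
        threading.Thread(target=extract, args=(reader, extracted), name="extract"),
        threading.Thread(target=convert, args=(extracted, converted, converter), name="convert"),
        threading.Thread(target=load, args=(converted, writer), name="load"),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
```

Running each stage on its own thread lets extraction from the legacy archive, format conversion, and loading into the new repository overlap, which is how large backlogs get through an off-hours window without touching day-to-day operations.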

Read More

Accessible PDF Discussion: Generating Desktop-Level and Enterprise PDFs

To help them achieve complete web accessibility, organizations require a viable technology solution for automatically generating high-volume (i.e., enterprise-level) personalized customer communications in Accessible PDF format. Such a solution would give blind and visually impaired people immediate and equal access to electronic documents, which they currently do not have. This series of blog posts explains why demand is increasing for customer communication documents in Accessible PDF format, describes current industry practices for producing these documents, and introduces a new technology option that enables organizations to keep PDF as their default electronic format for high-volume, web-delivered documents of record while enabling full accessibility and usability of those PDFs for individuals who are blind or visually impaired. In recent blog posts, we examined the Drivers behind Web Accessibility and the Best Practices for PDF and Accessible PDF. In this post, we will look at different approaches to generating PDF and Accessible PDF.

How PDFs are Generated

PDF documents are either created individually at the desktop level by human operators or in large batches at the enterprise level by sophisticated software applications.

Desktop Level

At the desktop level, PDF documents are manually created by individuals using word processors such as Microsoft Word, graphic design programs such as Adobe Creative Suite, or other software applications. These low-volume ad hoc documents typically include annual reports, newsletters, marketing collateral, training manuals, program support material, and other public-facing documents. PDF versions of scanned hardcopy documents may also be created at the desktop level.

Enterprise Level

At the enterprise level, PDF documents are automatically created in large volumes by powerful software applications (e.g., document composition engines) supported by enterprise-grade IT infrastructure such as relational databases, high-speed servers, and large-capacity storage devices. These high-volume documents are typically customer communications such as notices, statements, bills, and invoices that are personalized with customer account data for individual recipients. Because they contain confidential information, customers usually access such documents through secure, password-protected web portals.

Reasons Why PDFs are Inaccessible by Default

At the desktop level, PDF documents are typically produced by converting existing native documents (e.g., Microsoft Office documents, Adobe Creative Suite documents) into PDF format. During the conversion process, the software application attempts to formulate a tag structure based on the contents of the native document. If the author of the native document has not followed accessibility guidelines – explicitly identifying elements such as headings, and properly formatting lists, tables, and other items – the software simply assigns a tag structure based on its best algorithmic guess about the document, often resulting in errors. While software applications that generate Accessible PDF output are able to faithfully reproduce the appearance of a native document, they cannot infer a logical reading order or produce meaningful alternative text for graphical elements. When alternative text for images is missing from the native document, the PDF conversion engine assigns generic identifiers, e.g. “Image 54”, which are meaningless when read aloud by a screen reader program.
Unfortunately, the automated conversion process itself is error-prone, so even when native documents have been designed with accessibility in mind, Accessible PDFs require post-conversion inspection and adjustment by a knowledgeable technician to make their contents and tag structure 100% compliant with the specified accessibility standard (e.g., WCAG 2.0, Level AA). For example, during conversion, tables may be incorrectly tagged as graphics, or table data may become dissociated from its corresponding column or row header. Similarly, text headings may not be detected, and links without active and meaningful destinations may only be recognized as plain text. In short, desktop PDF conversion technology is far from perfect. Regardless of the quality of the native source document, PDFs created on the desktop must always be manually remediated and inspected (page by page) to ensure compliance with the accessibility standards required by assistive technologies. Traditionally, this is an expensive, labor-intensive, time-consuming process.

Traditional Approaches to Generating Accessible PDFs

Desktop Level

Existing low-volume, ad hoc PDF documents that need to be made accessible are manually remediated by human operators using specialized desktop software applications such as Adobe Acrobat Professional, CommonLook, ABBYY, and others. To lower costs and achieve faster turnaround times, many organizations contract out manual remediation to specialized third-party service providers that operate more efficiently than smaller in-house teams. To facilitate new document creation and improve quality, organizations develop standard accessible templates for individual document types (e.g., Microsoft Office) and enforce their use by employees, vendors, and contractors. Once converted to PDF, every page of a new document is subjected to automated and manual accessibility/usability testing and remediation to ensure that it conforms to accessibility standards such as WCAG 2.0, Level AA.

Enterprise Level

At the enterprise level, organizations use powerful software applications to dynamically generate high-volume PDF communications for online presentment to customers. However, manual remediation does not scale for enterprise production of Accessible PDFs. The large number of documents created every month – thousands, millions, tens of millions, or more – precludes manual remediation as a viable option, even when accessible versions are needed only for the percentage of customers who require the accessible electronic format. Due to sheer volume, manual remediation is cost and time prohibitive. Until recently, there was no automated technology solution either. Organizations simply had no way to produce high volumes of personalized customer communications in Accessible PDF format. That has now changed. In the next blog post, we will take a look at an innovative software system that automatically remediates and creates high-volume Accessible PDFs.
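As a concrete illustration of one of the spot checks a remediation technician performs, here is a minimal sketch that walks a tagged PDF's structure tree and flags Figure elements with no alternative text. It uses the open-source pikepdf library, covers only this single criterion, and is not how the enterprise solution described later in this series works.

```python
# Minimal sketch: flag Figure structure elements that lack /Alt text in a tagged PDF.
# Requires the open-source pikepdf library (pip install pikepdf).
import sys
import pikepdf

def walk(elem, missing):
    """Recursively visit structure elements, collecting Figures without /Alt."""
    if isinstance(elem, pikepdf.Array):
        for child in elem:
            walk(child, missing)
    elif isinstance(elem, pikepdf.Dictionary):
        if elem.get("/S") == pikepdf.Name("/Figure") and "/Alt" not in elem:
            missing.append(elem)
        if "/K" in elem:
            walk(elem["/K"], missing)

def audit(path):
    with pikepdf.open(path) as pdf:
        root = pdf.Root.get("/StructTreeRoot")
        if root is None:
            print(f"{path}: no structure tree at all -- the PDF is untagged")
            return
        missing = []
        if "/K" in root:
            walk(root["/K"], missing)
        print(f"{path}: {len(missing)} Figure element(s) with no alternative text")

if __name__ == "__main__":
    audit(sys.argv[1])
```

A check like this catches the “Image 54” problem described above, but a full audit still has to verify reading order, table structure, headings, and the rest of WCAG 2.0, Level AA.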

Read More

Data Driven Digest for May 1

Next Monday is Star Wars Day, and we’re getting a jumpstart on the holiday with today’s Data Driven Digest. Our first item analyzes the latest teaser trailer for the next installment – yes, there’s data there – and we remain out of this world to admire a video data visualization by an amateur astronomer. We finally plummet back to earth to visualize the value of our personal data.

Crash Course: It’s common practice to start with data and create visualizations. When Rhett Allain saw the new teaser trailer for Star Wars: The Force Awakens, he did just the opposite: he observed the landspeeder cruising across a desert planet (which, according to news reports, is not Tatooine) and calculated its speed. Published in Wired Science, Allain’s dissection of the scene is a terrific work of video forensics, and includes all of his underlying data. The Force is strong in this one.

Location, Location, Location: While we’re out in space, check out the video by Tony Rice, a software engineer, amateur astronomer, and “solar system ambassador” for NASA. The video visualizes 24 hours of Global Positioning System (GPS) coverage by tracing the path of all of the GPS satellites and showing the overlap of their signals on the surface of the earth. Rice created the video using JSatTrak and orbital data from NORAD. Rice has also created a similar video showing the “A-Train” – a constellation of Earth-observing satellites that travel in a sun-synchronous orbit. No mind tricks here.

Personal Cost: Unlike GPS – which is a single system that works worldwide – the web of privacy and security laws and regulations that covers the globe is a madly mixed bag, and so are people’s valuations of their personal data. In an article in the May issue of Harvard Business Review, authors Timothy Morey, Theo Forbath and Allison Schoop surveyed consumers in the United States, China, India, Great Britain, and Germany about how they value web search history, location, health history and nine other types of personal data. Their chart of variances from country to country is part of an article well worth reading.

Like what you see? Every Friday we share favorite examples of data visualization and embedded analytics that have come onto our radar in the past week.

Read More

How will 3D printers help improve banking KYC?

Yes, you just read this title correctly. 3D printers are widely known to offer the potential to be a game changer in the physical supply chain across many sectors and industries. However, the opportunity in Financial Services seems less likely, particularly in any form of real, practical application. Do you agree with that statement or not? Either way, I suggest you read on to find out more.

KYC – Why is it a sensitive subject?

Today, every Financial Institution runs its KYC processes itself, for lots of reasons. At the top of the list are the reputational and financial risks that remove any appetite to hand control of KYC to a third party, such as a shared utility or data service. A bank caught by its regulator servicing the wrong client is usually exposed to millions or billions of dollars in fines. As we’ve seen in the news over the last three years, such occurrences fall into the public domain, usually with lasting reputational damage.

Approach to KYC nowadays, and alternatives

The typical KYC process is executed manually, leveraging a combination of paperwork, de-materialisation and archiving. Overall, it is a costly and lengthy process that happens every time a new client comes on board or a new signatory is added to the relationship. It also needs to be refreshed and verified regularly. KYC processes also delay the “time-to-revenue” – typically the period between the agreed contract and the first day of transaction processing. A number of initiatives have been introduced over the last few years, trying to tackle this challenge from several angles. One common idea that keeps coming back is to enable KYC to be done once and for all, and shared between all Financial Institutions and Counterparties: a shared utility that would enable a client or counterparty, whether a business or an individual, to “passport” its KYC identity across all its financial suppliers. The benefits and advantages seem very compelling and include reduced costs of processing KYC, reduced time-to-revenue for the financial suppliers, and less hassle for the clients and counterparties. It’s a win-win for everybody, isn’t it?

Why is this nut so tough to crack?

Cost reductions and improved client experience look very small when put in perspective against the potential risks and costs associated with non-compliance. We’re talking about millions or billions of dollars in fines, the risk of losing a banking or insurance license, even shutting down the business entirely. The incremental gains and advantages of digital and shared KYC do not yet appear to offset these risks. Every few months we read about a government fining a Financial Institution for facilitating illegal activities, or about a shared utility or international business losing its clients’ personal information and payment details to hackers. Surely this is not a good industry backdrop to encourage digital KYC!

What about 3D printers – what’s the link?

As a consumer, I find that home 3D printers are overpriced gadgets with little practical purpose. As a B2B professional working with the largest Supply Chains in the world, however, I find the potential opportunity mind-blowing. Analysts agree that most global Supply Chains will be affected, shifting current patterns of commerce and logistics toward a complete transformation over the next few decades.
The biggest shift will happen around companies focusing on the production of Intellectual Property, delivered in the form of Digital Assets – such as the files containing the 3D model and assembly specifications for their products. Other companies will focus on the physical production of commercial items, based on those Digital Assets. Analysts agree that most of this world will never be exposed to consumers, just as the world of global logistics is hidden from them today.

The disruption: Digital Assets and Digital Identity

If you download music, movies or games regularly (legally, of course), then you probably know about Digital Rights Management (DRM). This early-2000s technology enabled, to some extent, content producers and commercial online sharing platforms to ensure that you are the only person able to play a track, or rent a movie for a certain period of time. 3D printers bring a new, far more compelling use case for such DRM: the opportunity to control who can print a product, how many copies, for how long, with verified raw materials and on certified printing equipment. There are typically two facets to this technology: the Digital Asset itself (the 3D design combined with printing requirements and authorised users), and the Digital Identity (the certified, authenticated businesses and users). You see where this is going now… Digital Identity management will spread fast and wide, surfing the 3D printing revolution in both B2B and consumer markets. Digital Asset owners and producers will have an enormous stake, and KYC shared utilities will probably continue to experiment and grow over the next couple of years, with more and more “use cases” coming into the frame. I don’t believe that shared utilities for a single industry will gather enough critical mass. Payments and Cash Management itself is already changing, with the introduction of PSD2 rules in Europe and the rise of Blockchain technology and distributed payment ledgers. Looking more broadly, banking users (businesses or consumers) are also beginning to require a unique Digital Identity for other aspects of their lives. Combining innovation with regulation over the next five years is going to be key, and the winner will likely be whoever manages to combine Digital Identity across several industries and markets, much like the IT Certificate Authorities (CAs) that have spread across all industries since the early 2000s.
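To make the pairing of Digital Asset and Digital Identity concrete, here is a minimal sketch of a design owner signing a 3D model file and a certified printer verifying the signature before printing. It uses Ed25519 keys from the open-source Python cryptography package; the file contents, print terms, and quota idea are illustrative assumptions, not a description of any existing DRM product.

```python
# Minimal sketch: sign a Digital Asset (a 3D model plus print terms) and verify it
# before printing. Uses the open-source "cryptography" package (pip install cryptography).
# The model bytes, print terms, and quota handling are illustrative assumptions only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The design owner generates a key pair; the public key would be distributed
# inside a certificate tied to their verified Digital Identity.
owner_key = Ed25519PrivateKey.generate()
owner_public = owner_key.public_key()

# The Digital Asset: the 3D model plus printing requirements (e.g. a copy quota).
model_bytes = b"...contents of widget.stl..."   # hypothetical model file
print_terms = b"copies=10;material=PLA"
asset = model_bytes + b"|" + print_terms

signature = owner_key.sign(asset)               # the owner signs the asset

def printer_accepts(asset: bytes, signature: bytes, trusted_public_key) -> bool:
    """A certified printer checks the signature before agreeing to print."""
    try:
        trusted_public_key.verify(signature, asset)
        return True
    except InvalidSignature:
        return False

print(printer_accepts(asset, signature, owner_public))                 # True
print(printer_accepts(asset + b"tampered", signature, owner_public))   # False
```

In a real scheme the owner's public key would itself be certified against a verified Digital Identity – which is exactly where KYC-style utilities and Certificate Authorities come back into the picture.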

Read More

Wearables, Big Data, and Analytics in Healthcare

As wearable technology – including smartwatches, fitness trackers, and even clothing and shoes with integrated sensors – moves into the mainstream, healthcare organizations are exploring ways to use these devices to simplify, transform and accelerate patient-centric care. Their goals include boosting people’s health, improving patient outcomes, streamlining manual processes and opening new avenues for medical research and epidemiology. Analytics, data visualization and reporting are central to those efforts.

Transforming Data into Insight and Action

Wearables today can monitor and gather wearers’ activity level, heart rate, and other vital signs; reward wearers for healthy activities and habits; and alert the wearer and others, such as doctors, emergency responders and family members, when problems arise. “Wearable health technology brings three distinctly beneficial trends to the table – connected information, community, and gamification,” writes Vala Afshar on the Huffington Post. “By harnessing this trifecta, healthcare leaders have new ways to build engagement and create accurate, far-reaching views of both personal and population health.” Wearables are both producers of data (collecting and transmitting wearers’ data) and consumers of data, receiving and displaying information about the wearer’s well-being and progress. Wearables are textbook generators of big data, with high velocity, volume and variety. And as in any big data scenario, transforming that data into insight and action requires a powerful, scalable analytics, data visualization and reporting platform. Wearables in healthcare share many characteristics with the networks of sensors in Internet of Things (IoT) applications. But healthcare brings additional complexities and wrinkles, particularly regarding security. With IoT, everyone agrees that security is important, but the rules and standards vary and are subject to debate. However, when individuals’ personal health data is in the mix, more (and more complicated) laws, security regulations and privacy concerns kick in. “A person’s health information is particularly sensitive,” writes Victoria Horderen in the Chronicle of Data Protection. “[B]oth in a legal sense (because health information is categorized as sensitive under EU data protection law) but also in an obviously everyday sense – people feel that their health information (in most but not all circumstances) is private.” Horderen writes specifically about the EU Data Protection Regulation, but the points she makes apply globally. The takeaway, I think, is that a platform supporting a wearable initiative in healthcare requires a robust, proven security foundation.

Many Use Cases

With a flexible big data platform supporting wearables, many healthcare use cases arise. Most of these are possible with today’s technology, while others could be on the horizon using future generations of devices. Some use cases include:

• A person under observation for heart disease can use a wearable to monitor his or her heart rate 24/7, not just while at the doctor’s office. The wearable enables collection of both historical and point-in-time data, and the platform enables in-depth analysis of the data.
• Alerts presented on a smartwatch can provide customized encouragement for good behavior (such as walking or stair climbing) and positive lifestyle choices (such as getting enough sleep). Such uses are ripe for gamification: if the wearer walks a certain number of steps (customized for the individual), rewards are unlocked. People are more likely to embrace a wearable if it provides an element of fun and positive feedback.
• Data from large numbers of wearers can be anonymized and aggregated to perform epidemiological studies. Data can be segmented by geography, activity level, and demographics if wearers choose to opt in.
• A wearable paired with a GPS-enabled smartphone can transmit coordinates and pertinent data to first responders in case of an emergency and alert family members of the wearer’s status.
• A surgeon wearing smart glasses can monitor patient vital signs and other medical equipment in real time during an operation without turning away from the patient.

Think Small, Think Big

As these use cases indicate, a platform for wearables in healthcare needs to operate on a micro level, sending customized, personalized alerts, recommendations and actions to individuals based on their own data. But a platform should also enable macro-level analysis of vast quantities of data to spot trends and identify correlations within large populations. The ability to analyze data on a large scale not only holds promise for medical research, but it also improves the wearable’s value to the individual user: an intelligent platform with access to individual and aggregate data can, for example, tell the difference between a heart rate spike due to exercise – a good thing to be encouraged – and a cardiac episode requiring attention and intervention, judging on a case-by-case basis rather than against a pre-set threshold.

One last bit of good news for healthcare providers who want to embrace wearables: doctors are more trusted than any other group with consumers’ personal data. According to research by Timothy Morey, Theo Forbath and Allison Schoop published in the May 2015 issue of Harvard Business Review, 87 percent of consumers find primary care doctors “trustworthy” or “completely trustworthy” with their personal data. That percentage is greater than for credit card companies (85 percent), e-commerce firms (80 percent), and consumer electronics firms (77 percent), and much higher than for social media firms (56 percent). As wearable use grows, that healthy goodwill is worth building on.

Smartwatch image by Robert Scoble.
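As a toy illustration of the micro-level logic described above, here is a minimal sketch of a rule that combines a wearer's personal resting baseline with recent activity readings to decide whether a heart rate spike looks like exercise or like something worth flagging. The thresholds and data fields are illustrative assumptions only – not clinical guidance and not an OpenText product feature.

```python
# Minimal sketch: classify a heart-rate spike using the wearer's own baseline
# and recent accelerometer-derived activity. Thresholds are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    heart_rate: int        # beats per minute
    activity_level: float  # 0.0 (at rest) .. 1.0 (vigorous movement)

def assess(readings: list[Reading], resting_hr: int) -> str:
    """Return 'normal', 'exercise', or 'review' for the latest reading."""
    latest = readings[-1]
    recent_activity = mean(r.activity_level for r in readings[-5:])
    elevated = latest.heart_rate > resting_hr * 1.5   # personalized, not a fixed number
    if not elevated:
        return "normal"
    # An elevated heart rate with sustained movement looks like exercise...
    if recent_activity > 0.4:
        return "exercise"
    # ...while the same spike at rest is worth flagging for follow-up.
    return "review"

history = [Reading(72, 0.05), Reading(75, 0.04), Reading(118, 0.03),
           Reading(121, 0.02), Reading(124, 0.02)]
print(assess(history, resting_hr=68))   # "review": elevated heart rate with no activity
```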

Read More

To Keep or Not to Keep: A Records Management Question

With 90% of the world’s data generated over the last two years and enterprise information growing at an exponential rate, the ability to effectively manage and govern the lifecycle of important electronic business information is more important than ever. Recently, OpenText CMO Adam Howatson sat down with Tracy Caughell, Director of Product Management, to discuss worldwide records management regulations, the consequences of non-compliance, and the cost benefit to organizations that embrace records management solutions. According to Caughell, one of the benefits of OpenText Records Management is the ability to identify when records can be disposed of. “We all know that we need to get rid of stuff,” she says. “Our strength is allowing [customers] to do it in a way that is defensible, a way that they can prove they did what they were supposed to do, when they were supposed to do it, with minimal disruption to their daily activities.” Watch the video to learn more.

From capturing, to classifying, to managing information, having an enterprise-wide records management strategy can help organizations comply with laws, external regulations and internal policies. By providing a structured and transparent way to maintain records from creation through to eventual disposition, Records Management can help to enhance corporate accountability, ensure regulatory compliance, and make it easier to find the information you need.

More information:
• Read how Sprint uses OpenText Records Management to take control of its records and documents.
• Learn more about the OpenText Records Management solution.

Photo courtesy of Marcin Wichary.
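To illustrate the kind of rule a records management system evaluates when it identifies records that can be disposed of defensibly, here is a minimal sketch. The record classes, retention periods, and legal-hold flag are illustrative assumptions, not OpenText Records Management behavior.

```python
# Minimal sketch: flag records whose retention period has elapsed and that are not
# under legal hold, so their disposition can be reviewed, approved and documented.
from dataclasses import dataclass
from datetime import date

# Illustrative retention schedule: years to keep each record class.
RETENTION_YEARS = {"invoice": 7, "contract": 10, "marketing": 2}

@dataclass
class Record:
    record_id: str
    record_class: str
    created: date
    legal_hold: bool = False

def eligible_for_disposition(rec: Record, today: date) -> bool:
    """True when retention has elapsed and no hold prevents disposition."""
    years = RETENTION_YEARS[rec.record_class]
    keep_until = rec.created.replace(year=rec.created.year + years)  # leap-day edge ignored
    return not rec.legal_hold and today >= keep_until

records = [
    Record("R-001", "invoice", date(2007, 3, 1)),
    Record("R-002", "contract", date(2012, 6, 15), legal_hold=True),
]
for rec in records:
    if eligible_for_disposition(rec, date(2015, 7, 1)):
        print(f"{rec.record_id}: retention elapsed, route for approved disposition")
```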

Read More

Information Security in the Digital Age [Podcast]

This is the first of what we hope to be many podcasts in which we explore the technology and culture of Enterprise Information Management (EIM). We’re going to share stories about how OpenText is delivering world-class technology and improving our Customer Experience on a daily basis. In this installment, we hope to give you a better understanding of the current cyber security climate, show you what we’re doing to keep your data secure and protect your privacy, and tell you how you can protect yourself online. Our discussion on information security has been recorded as a podcast! If you’d like to listen but don’t see the player above, click here. If you don’t want to listen to the podcast, we’ve transcribed it for you below:

… The unknown unknown…

… If it was three in the morning and there was a bunch of guys standing down a poorly lit alley, would you walk down there by yourself? Probably not. Yet on the Internet, we do that continuously – we walk down that street – and then we’re shocked when negative things happen…

… People have an expectation that once they put a lock on their door they’re secure. And that might be the case in their home. But electronically it’s not quite so simple…

Are we safe online? Perhaps a better question is whether our information is safe online. 2014 was a banner year for information, data – what we now call cyber – security, and if analyst reports can be any indication, security professionals are on high alert in 2015. International governing bodies have also placed an urgency on better understanding cyber security risks and putting in place strategies to ensure stable telecommunications and safeguard information. There has also been growing concern around data privacy. Though security and privacy work hand-in-hand and it’s difficult to have data privacy without security, there is a difference between the two terms. Security involves the confidentiality, availability and integrity of data. It’s about only collecting information that’s required, then keeping that information safe and destroying it when it’s no longer needed. On the other hand, privacy is about the appropriate use of data.

To help us through the topic of cyber security, we talked to Greg Murray, VP of Information Security and Chief Information Security Officer at OpenText. The OpenText security team is made up of specialists around the world who provide operational response, risk assessments and compliance. They also brief executive leadership regularly, and keep development teams abreast of pertinent security information. More importantly, Greg and his team work with our customers to ensure their unique security needs are covered end-to-end. “It starts early in the process,” says Greg. “It starts in the presales cycle where we try to understand the risks that [our customers] are trying to manage in their organization. We find out how they are applying security against that, and then that becomes contractual obligation that we make sure is clearly stated in our agreement with the customer. From there, it goes into our operations center – or risk center, depending on what we’re looking at – and we ensure that whatever our obligations, we’re on top of them and following the different verticals and industries.” Again, 2014 was a big year for cyber security in the news (I think we all remember the stories of not too long ago).
But while news agencies focused on the scope and possible future threats, Greg learned something else: “I think if we look at media, one probably would not have argued until last year that media was a high threat area compared to something like aerospace defense. That has changed. Clearly that has changed. As a result, customers come back and say, ‘Hey, our environment has changed. What can you do to help us with that?’” “What a financial institution requires is very different than what a manufacturing provider requires or a pharmaceutical organization. Some of that, as a provider to these organizations and customers, we can carry for them on their behalf. In other cases they must carry it themselves. A lot of the discussions that we have with customers are in regards to ‘Where’s that line?’” “At the end of the day, there’s a collaboration. It’s not all on the customer, it’s not all on OpenText. We have to work together to be able to prove compliance and prove security across the environment.”

Regardless of the size, industry or location of an organization, security needs to be a top priority. This concept isn’t a new one. As Greg told Adam Howatson, OpenText CMO, in a recent Tech Talk interview, information security hasn’t evolved that much over the last 50 years (view the discussion on YouTube). Greg’s answer may surprise you, but after some digging I learned that back in 1998, the Russian Federation brought the issue of information security to the UN’s attention by suggesting that telecommunications were beginning to be used for purposes “inconsistent with the objectives of maintaining international stability and security.” Since then, the UN has been trying to increase transparency, predictability and cooperation among the nations of the world in an effort to police the Internet and private networks. Additionally, if you have seen the Alan Turing biopic The Imitation Game, you know that people have been trying to encrypt and decipher messages since the 1940s and probably even earlier. Today, the lack of physical borders online has certainly complicated things, but the information security game remains the same, and cooperation among allies remains the key. “Are we all contributing together?” Greg asks. “If we’re all working together – just like Neighborhood Watch – we need that same neighborhood community watch on the internet. If you see stuff that doesn’t look right, you should probably report it. The bad guys are organized and we need to be organized as well. The more we share information and the more we work together… Particularly at OpenText, we have a lot of customer outreach programs and security work where we work hand-in-hand with customer security teams. By doing that, we improve not only our security, but we improve security across the industry.”

Recently I attended a talk given by Dr. Ann Cavoukian, former Ontario Privacy Commissioner and Executive Director at the Privacy and Big Data Institute at Ryerson University in Toronto. In it, she said that “privacy cannot be assured solely by compliance with regulatory frameworks; rather, privacy assurance must ideally become an organization’s default mode of operation.” She said that privacy – which, again, involves the appropriate use of information – must be at the core of IT systems and accountable business practices, and built into physical design and networked infrastructure. Privacy needs to be built into the very design of a business.
And I think it’s evident from what Greg says about security, and the way OpenText designs its software with users’ needs in mind, that our customers’ privacy and security are an essential part of what we offer. “We have a tremendous number of technical controls that are in place throughout all of our systems. For us, though, it starts on the drawing board. That’s when we start thinking about security.” “As soon as Product Management comes up with a new idea, we sit down with them to understand what they’re trying to achieve for the customer and how we’re going to secure it. So that by the time somebody’s uploading that document, it’s already gone through design, engineering, regression testing analysis, security penetration testing.” “One of the other things we do is called threat modelling. Typically we look at the different types of solutions – whether they’re file transfer or transactional, for example – and we look across the industry to see who has been breached and how. We then specifically include that in all of our security and regression testing.”

You don’t need to look further than the OpenText Cloud Bill of Rights for proof of our dedication to information security and privacy. In it, we guarantee the following for our cloud customers:

• You own your content
• We will not lose your data
• We will not spy on your data
• We will not sell your data
• We will not withhold your data
• You locate your data where you want it

Not everyone is up front with their data privacy policy, but with people becoming more aware of information security and privacy concerns, organizations are going to find themselves facing serious consequences if they do not make the appropriate changes to internal processes and policy. Data security doesn’t lie solely in the hands of cloud vendors or software developers, however. We asked Greg what users and IT administrators can do to protect themselves, and he said it comes down to three things:

“One is change your passwords regularly. I know it sounds kind of foolish, but in this day and age if you can use two-factor or multi-factor authentication that does make a big difference.”

“The second thing you can do is make sure your systems are patched. 95% of breaches happen because systems aren’t patched. When people ask ‘What’s the sexy side of security?’, it’s not patching. But it works. And it’s not that expensive – it’s typically included free from most vendors.”

“The third thing is ‘think before you click.’ If you don’t know who it is or you don’t know what it is… Curiosity kills the cat and curiosity infects computers.”

We hope you enjoyed our discussion on information privacy and cyber security. If you’d like to know more about the topics discussed today, visit opencanada.org, privacybydesign.com and of course Opentext.com. We also encourage you to learn more about security regulations and compliance by visiting the CCIRC and FS-ISAC websites.

Read More

Digital Disruption: The Forces of Data Driven Smart Apps [Part 4]

Editor’s note: Shaku Atre (right at our Data Driven Summit last December) is the founder and managing partner of the Atre Group, Inc. of New York City, NY and Santa Cruz, California. Atre has written a thorough and compelling treatise on the disruptive power of mobile apps, supporting her analysis and conclusions with templates and case studies.  We are privileged to present her analysis here in four parts. In Part 1, she made the case for mobile apps and laid out  some of the forces behind digital disruption. In Part 2 Atre described two more disruptive forces, and in Part 3 she shared two templates for creating mobile app case studies. Today, the series concludes with mobile app case studies in financial services, telecommunications, car rental, and pharmaceutical industries. At the end of this post we share a link to download the entire series.    Case Study 1: Financial Services Let us consider a basic consumer and small business bank as a publisher of a smart app for “Business Advantage Checking with Mobile Banking” Figure 3 Who are the primary beneficiaries? a. The bank’s customers Who are the secondary beneficiaries? a. The banks’ customer base grows with very good referrals. The bank   itself will be the beneficiary.b. Banks make money by using customers’ money in loaning the money at higher rates for loans given out as compared to what the banks pay their customers. c. If a bank has a good amount of money in reserves, the bank can get a better interest rate from the Federal Reserve Bank. d. Telecommunications companies that supply Internet capabilities to the banks e. Credit score companies which keep track of credit worthiness of the individuals f. Government which can put a hold on the accounts if taxes are   not paid g. Mortgage banks; credit card companies; employees: city, state and federal governments with electronic funds transfers from the checking account – any recipients of the EFT Examples of functionalities to be provided by the bank’s App: • The main intent of this app is to have your own bank in your back pocket. • Which devices are used for online banking? (most frequently used devices are as follows – and we will have many more that we can’t even think of): a. iPhone (iOS)b. iPad (iOS) c. Android Phone d. Windows Phone e. Blackberry f. iPad (iOS) g. Android Tablet h. Kindle Fire i. Other Devices • The main functions that are expected of a successful bank are: Login, account authentication, account overview on one screen– easy creation of your own banking dashboard, deposit, bill-pay, transfers, person to person payment, message center, customization, PFM (Personal Finance Management) with possible integration with an accounting software. • Your own banking dashboard addressing needs of different customer segments: Total control of the end-user, customized dashboard with drag and drop capability of widgets, widget catalog, store personal likes and dislikes to make the experience desirable. • Accounts and Transactions: a. Monitoring balances of all types of accounts at the bank: checking, savings, debit and credit cards, mortgage loans,    personal loansb. Transaction details with various filtering options, tools to help categorize and tag. c. 
Tracking of completed as well as pending transactions, ATM withdrawals and deposits, check deposits, cash deposits, online deposits, warnings about when deposited money will become available to withdraw, grouping of accounts, joint accounts and setting limits on how much each person can withdraw at once and within what timeframe, and adding accounts from other external banks
• Deposits & Loans:
a. Keep track of the details of deposits and loans
b. Verify on an ongoing basis credit card limits, credit card payments each month, loans and overdrafts, and interest rates
c. Prioritize repayment schedules, paying the loans with the highest interest rates first, etc.
d. Simplify the new loan application process
• Money Transfers & Person-to-Person Payments:
a. Money transfers of multiple types, such as Person-to-Person (P2P) Transfers as well as Account-to-Account (A2A) Transfers
b. Domestic Transfers and International Transfers
c. Scheduled and recurring payments are supported
d. Connection with social address books so that friends can transfer money using email addresses or mobile phone numbers
e. Pending or scheduled transfers are watched
• Bill Pay:
a. Vendors should be able to email bills to a secure message center
b. Optical Character Recognition and mobile scanning capabilities for paper invoices from previously verified vendors, for quick payments
• Split & Share:
a. Receiving invoices together and splitting them by automatic debits of accounts among already declared friends
b. Social address books integrated with the banking transactions
• Alerts (see the sketch after these case studies):
a. If the balance in the account goes down to a certain amount, an alert message is sent to the account holder’s smartphone
b. An alert about a bounced check and the charge taken out of the checking account
• Which services would you like to search for?
a. ATMs
b. Banking centers
c. 24-hour ATMs
d. Banking centers open Saturdays
e. Drive-up ATMs near my current location

Which parts of Big Data can be stored and used? (Figure 4)
• Customers’ Data Storage: Primary beneficiaries’ direct deposits, direct debits, EFTs, account-to-account and person-to-person transfers, balance transfers, account management with QuickBooks integration
• Customers’ (Primary & Secondary) Transactional Data: Customers’ invoices and archived internal data
• Potential Customers’ Data Storage: Data such as referrals received, credit scores, accounting integration
• External Data Storage: Marketing and industry big data such as FIX, SWIFT and competitive data; data such as bank reserves, troubled banks and prime rate changes are streamed

Case Study 2: Telecommunications
Let us consider a mobile telecommunications service provider as the publisher of a smart telephone service app. (Figure 5)
Who are the primary beneficiaries?
• Consumers, small and large business owners
Who are the secondary beneficiaries?
• Telephone manufacturers
• Independent telephone service providers
• Insurance companies insuring the hardware
• All types of industries that have mushroomed around mobile equipment and services
Examples of functionalities to be provided by the telecom’s app:
• View your usage
• Purchased Extras
• Manage your plan & Extras
• View recent transactions
• Top Up – Credit Card
• Top Up – Prepaid Voucher
• Pay your bill
• View Activity – For Prepaid
• Alerts & Notifications – For Prepaid
Novel apps for telecom:
• Public health, e.g. an Ebola outbreak: connect, via toll-free numbers, the human resources needed to help save lives at massive scale. Telecom and Internet are the two most important ingredients.
• Interactive voice response currency converter app: providing up-to-date exchange rate information
• Finding missing children

Which parts of Big Data can be stored and used? (Figure 6)
• Customers’ Data Storage: Customer usage, recent transactions, voice usage, data usage, bill payment, top-up credit card
• Customers’ Transactional Data: Customers’ invoices and archived internal data
• Potential Customers’ Data Storage: Telephone manufacturers’ special discounts, insurance companies’ special offers for lost telephones, special deals
• External Data Storage: Telephone sales statistics, regulatory commissions’ data, marketing campaigns by various telephone service companies, FTC rules and regulations

Case Study 3: Car Rental
Let us consider a car rental company as the publisher of a smart app, the “Automobile Traffic Management App” (ATMA). (Figure 7)
Primary beneficiaries of the app: Drivers and accompanying passengers
Secondary beneficiaries of the app, and their benefits:
1. Police Department
• Information on the severity of accidents and any actions necessary based on that information
2. Hospitals
• Which types of ambulances should be sent, and how big should they be?
• Which specialty of physicians should be ready to help the injured?
• Which equipment should be kept ready?
3. Property & Casualty Insurance Companies
• Which roads are hazardous and cause property damage?
• Which drivers are “high risk”?
Examples of functionalities to be provided by ATMA:
• Traffic maps to show the travel route taken by the driver (a visualization)
• Alerts (deployment of mobile technology – a speaking app; the driver can set a timer for how often the app should speak, e.g. every five minutes or every ten minutes)
• Traffic congestion:
i. Maps – a visualization that integrates a map with easy-to-understand visual icons, such as bumper-to-bumper traffic, ambulances, etc.
ii. What type of interaction could be provided to avoid hazardous situations
iii. The possible reasons for the backup:
1. Construction
2. Accident
3. A specific event
4. Inclement weather
iv. How long the backup is in terms of time
v. What the average speed is
vi. How long the expected delay to reach the destination is
vii. Should someone be informed about the delay? (This could be set up before starting the journey; if the delay is longer than fifteen minutes, the consumer should be informed with a text message.)

Which parts of Big Data can be stored and used? (Figure 8)
• Customers’ Data Storage: Driver information such as driving records, DMV records, car information, starting and ending locations, etc.
• Customers’ Transactional Data: Customers’ invoices and archived internal data
• Potential Customers’ Data: Police reports of various accidents, hospitals’ reports, insurance claims, state and county road renewal plans, new construction plans
• External Data Storage: State Highway Patrol data, road sensor data, maps, construction data updates, previous accident data in each part of the traffic area

Which parts of Big Data could be used?
i. State Highway Patrol data
ii. Road sensors for accurate readings
iii. Maps should be zoomable and clickable, and should provide accurate speeds for each exit along the highway
iv. Drivers should be able to report any incidents on the roads by “speaking” in the car, keeping both hands on the steering wheel; that “voice data” could become part of the “Big Data” for traffic information
What are novel ways for decision making by drivers?
v. Getting alerts to save time, for example by driving routes with less traffic
vi. Avoiding hazardous situations
vii. Recording the problem areas in a database stored in the automobile’s memory, and evaluating those records before starting any trip longer than an hour
viii. The app informing the parties at the destination so that they know why the driver is delayed
ix. If a restaurant lunch or dinner is set for a certain time, requesting the scheduled reservation time plus the estimated delay and another 15 minutes
x. Police reports of various accidents, hospitals’ reports, insurance claims, state and county road renewal plans, new construction plans

Case Study 4: Pharma
Let us consider a pharmaceutical company as the publisher of an app, with people with ailments as primary customers and pharmacies, physicians, hospitals, clinics, medical insurance companies, Medicare and Medicaid as secondary customers. (Figure 9)
Examples of functionalities provided by pharma apps:
• Diary-based apps:
• Assisting patients with the day, the time, and the dose taken or to be taken; a medication passport (AstraZeneca) with the names, doses and timings of the drugs
• Glucose monitoring apps for patients with diabetes
• Helping patients track test results and appointments (Eli Lilly – MyNet Manager)
• Contraceptive reminder, My iPill by Bayer
• Procedures: Showing how to administer certain procedures, e.g. self-administered insulin injections for diabetes patients (Eli Lilly)
• Educational: Foods that reduce the risk of diabetes on one side, and those that exacerbate it on the other (Boehringer Ingelheim’s Complications Combat)
• Alerts: Sending alerts to family members when someone doesn’t take their medication, and producing charts showing adherence to treatment regimens (Johnson & Johnson’s subsidiary Janssen – about half of patients miss medications)
• Weight Loss: Keeping track of weight and food intake (Noom Weight Loss Coach)

Which parts of Big Data can be stored and used? (Figure 10)
• Primary Customers’ (Patients’) Data Storage: Patients report which drugs they take, related improvement in ailments, undesired reactions experienced and the severity of those reactions
• Customers’ (Primary & Secondary) Transactional Data: Customers’ invoices and archived internal data
• Potential Customers’ Data Storage: Pharmacies record sales of drugs, from the most frequently sold to the least frequently sold, and which physicians recommend which drugs
• External Data Storage: Data from national health services, the Global Registry of Acute Coronary Events (GRACE), the Centers for Disease Control and Prevention (CDC)

Prepare and act to handle the Digital Disruption that is rumbling around the corner.
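Several of the app features across these case studies boil down to the same pattern: evaluate a simple threshold rule against incoming account or traffic data and push a notification when it trips (a low-balance alert, or a delay longer than fifteen minutes). The Python sketch below is only an illustration of that pattern; the rule names, values and notifier are hypothetical, not any particular bank's or rental company's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ThresholdRule:
    """A simple alert rule: fire when the observed value crosses a limit."""
    name: str
    limit: float
    trips_when_below: bool  # True for "balance fell below X", False for "delay exceeded X"

    def is_tripped(self, value: float) -> bool:
        return value < self.limit if self.trips_when_below else value > self.limit

def evaluate(rules: list[ThresholdRule], reading: dict, notify: Callable[[str], None]) -> None:
    """Check one incoming reading (e.g. a balance update or a delay estimate) against all rules."""
    for rule in rules:
        value = reading.get(rule.name)
        if value is not None and rule.is_tripped(value):
            notify(f"Alert: {rule.name} is {value}, threshold {rule.limit}")

# Hypothetical usage: a low-balance alert and a "delay longer than 15 minutes" alert.
rules = [
    ThresholdRule("account_balance", limit=100.0, trips_when_below=True),
    ThresholdRule("expected_delay_minutes", limit=15.0, trips_when_below=False),
]
evaluate(rules, {"account_balance": 42.50}, notify=print)
evaluate(rules, {"expected_delay_minutes": 22}, notify=print)  # a real app would send an SMS here
```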


Health Care Organizations’ Email Security Isn’t Making the Grade

So, it looks like a lot of health care organizations are flunking email security. According to the “state of email trust” survey cited in a recent Fortune Magazine article, health care organizations “severely lag” when it comes to securing email communications. In fact, the article states that an email, “purportedly sent from a typical health insurance company is, for instance, four times likelier to be fraudulent than an email that claims to be from a social media company.” A spokesperson from the surveying organization went on to state that “The poor folks in health care have traditionally not had much digital interaction. They’re the ones furthest behind by a country mile.” Considering the strict security compliance regulations in the space, this is disconcerting for the health care industry. The article went on to explain that only one of the 13 health care companies surveyed surpassed the ‘vulnerable’ category when it came to implementing three standard secure email protocols:
Sender Policy Framework, or SPF, which checks emails against a list of authorized senders
DomainKeys Identified Mail, or DKIM, which verifies the authenticity of a sender through encrypted digital signatures
Domain-based Message Authentication, Reporting, and Conformance, or DMARC, which checks emails against a published record on a company’s servers, notifies the company of any potentially spoofed emails, and rejects suspicious emails as spam
Fortunately, solutions like OpenText Secure Mail support these security protocols while tracking, encrypting and controlling the distribution of your secure email messages. One key feature of Secure Mail that might help some of these “vulnerable” health care companies is Data Leak Prevention (DLP). This capability limits access and transmission of sensitive information based on specific security policies. Features like this position Secure Mail as a strategic business tool for organizations looking to maintain the confidentiality of protected health care information.
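For readers who want to see what these protocols look like in practice, the sketch below queries a domain's published SPF and DMARC policies over DNS using the open-source dnspython library (DKIM verification needs the selector and the signed message itself, so it is omitted). The domain is a placeholder, and this is only an illustration of where the records live, not a full validator.

```python
# Minimal sketch using dnspython (pip install dnspython); example.com is a placeholder domain.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return the TXT records published at a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF policy:  ", spf or "none published")
print("DMARC policy:", dmarc or "none published")
```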


Healthcare Data Breach Hits Top Insurer

It looks like the report was accurate. I recently blogged about a Healthcare Informatics article entitled “Report: Healthcare Data Breaches Expected to Increase in 2015”. The article discussed a report stating how Personal Healthcare Information’s (PHI) continued shift towards digital formats will heighten exposure to data breaches. Unfortunately, the report might be right; according to an article in Health Data Management, health insurer Premera Blue Cross was just hit by a “sophisticated cyberattack.” Premera said hackers may have accessed vital member and applicant information such as names, dates of birth, email addresses, Social Security numbers and bank account information. Premera is working feverishly to address this “giant hack” however possible. It goes without saying that data security is an extremely important compliance issue for the healthcare sector (HIPAA, anyone?). This news only amplifies the fact that healthcare organizations must treat the successful implementation, as well as the consistent assessment, of electronic data security policies as non-negotiable. Depending on the organization, PHI is shared in various ways – whether by fax, email, or managed file transfer – and each tends to play a key role in the exchange of PHI. In most cases these modes of electronic data transmission have security features like message encryption in transit and at rest, Data Leak Prevention (DLP), specialized viewing privileges and much more – all of which drive the protection, integrity and security of electronic PHI. If anything, Premera’s experience should prompt healthcare organizations to vigilantly re-evaluate the quality of their security measures for protecting electronic PHI. To learn more about how OpenText Information Exchange products can help, please click here.
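Data Leak Prevention products apply policies far richer than anything shown here, but the core idea can be illustrated simply: scan outbound content for patterns that look like protected data and block or flag the message. The Python sketch below is a toy illustration under that assumption; it is not a description of how any vendor's DLP actually works, and the regexes are deliberately simplistic.

```python
import re

# Toy DLP policy: regexes suggesting protected health or financial data may be present.
# Real products use much richer detection (dictionaries, proximity rules, fingerprinting).
POLICY = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(message: str) -> list[str]:
    """Return the names of any policy rules the message violates."""
    return [name for name, pattern in POLICY.items() if pattern.search(message)]

violations = scan_outbound("Patient SSN 123-45-6789, please update the claim.")
if violations:
    print("Blocked: message matches DLP rules", violations)  # e.g. quarantine or force encryption
else:
    print("Delivered")
```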


How B2B Integration Drives Superior Supply Chain Performance

Today’s manufacturers face a constant challenge of balancing supply chain efficiency with the investment placed in their B2B integration platform. To get a better understanding of whether increased use of B2B solutions and services impacts the performance of a supply chain, OpenText sponsored a new B2B integration study with IDC Manufacturing Insights. This blog will briefly summarise some of the key findings from the study. IDC conducted a one-hour qualitative survey with 270 global manufacturers across the automotive, high tech and consumer packaged goods (CPG) sub-sectors. We had representation from eight countries and regions: Brazil, China, France, Germany, Japan, South Korea, the UK and North America. In order to test the hypothesis, IDC asked a number of questions about current B2B implementation initiatives across the 270 companies, as well as questions relating to key supply chain metrics at each company. I spent a few months working with IDC on this study, so let me just highlight some of the B2B responses first. The first question looked at the key business initiatives that companies were embarking on over the next three years, and international expansion into new markets was the key project, as shown by the chart below. It is interesting to note that while many companies are trying to improve supply chain visibility and responsiveness, those initiatives did not rank as highly as international expansion, developing more services and reducing operational costs. Indeed, diversification into new sub-sectors is a key activity for many manufacturers today, for example high-tech companies exploring new opportunities in the growing electric vehicle market. In order to understand how pervasive B2B technologies were across the companies surveyed, the next question asked about the volume of electronic transactions being conducted today. Given the consumer-driven, fast-moving nature of the automotive and high tech sectors, it is no surprise that it is these two industries that are exchanging transactions electronically with more than 75% of their trading partners. CPG, on the other hand, has a relatively low level, probably because many CPG goods are manufactured in countries such as India and China, where the use of B2B tools is relatively low compared to other manufacturing hubs around the world. The study found there were a number of business drivers for companies needing to improve their B2B environment over the next three years. According to leading analysts, the manufacturing sector is going to be the fastest growing adopter of new Governance, Risk and Compliance (GRC) regulations. This was confirmed by the responses to our study, which showed that increased regulatory compliance was the number one reason why companies were increasing investment in their B2B infrastructure. This was closely followed by increasing pressure from customers to adopt B2B integration processes. The survey showed a marked shift in the key barriers to adopting new B2B services. One of the main barriers in the past was getting top-level management buy-in that B2B integration could bring significant benefits to the business. Our study showed that this barrier was now the least likely to prevent a new B2B project from starting. In fact, the number one barrier to increased B2B adoption was competing IT projects such as ERP.
ERP is typically the number one focus area for CIOs and as such tends to get the most budget and resources to deploy. ERP systems typically have to be live by a specific date, and if the date slips then IT resources from other projects are pulled in as required. This can leave other IT projects, such as a B2B on-boarding project, severely exposed. Even when companies have deployed both an ERP and a B2B environment, our study showed that nearly 40% of companies had still not integrated their ERP and B2B platforms. Here at OpenText we find ERP-to-B2B integration projects are a key driver for companies adopting our B2B Managed Services environment. In terms of the benefits gained from B2B integration, companies cited lower inventories as the main benefit. This was most apparent among automotive respondents, nearly 60% of whom have invested heavily in recent years following the last economic downturn and to help support their global expansion initiatives. As I highlighted at the beginning of this blog post, the study was truly global in nature, covering all the major manufacturing hubs around the world, and I just wanted to briefly highlight some of the key findings by region:
71% of German companies trade electronically with less than 50% of their trading partners
80% of Japanese companies said that inventory reduction was a key benefit of B2B integration
62% of US companies trade electronically with more than 50% of their trading partners
27% of Chinese companies trade electronically with more than 50% of their trading partners
57% of South Korean companies said that supply chain complexity was a key barrier to B2B adoption
One of the major goals of the study was to find out how companies were progressing in their understanding of how modern B2B technologies can help drive superior business results. To achieve this, it was important to get an understanding of the perceived performance of specific supply chain activities. Once these supply chain metrics were analysed, it would then be possible to see if there was any correlation between supply chain performance and the impact of B2B technologies. Here are some examples of the metrics that were measured as part of the analysis:
50% of US companies can process an invoice in under one hour
73% of Chinese companies have an average time to market of less than 120 days
90% of Brazilian companies perform up to two inventory turns per month
87% of Chinese companies deliver greater than 95% perfect orders
60% of Japanese companies have an average customer order delivery time of less than 7 days
Overall, there were some interesting findings from a supply chain metrics point of view, and I will write a separate blog that examines some of these results. But in the meantime I just wanted to include one chart relating to a specific business process that is seeing increasing levels of digitisation, namely invoicing. The chart below highlights the time it takes the surveyed companies to process an invoice. The real-time numbers would indicate companies that have adopted electronic invoicing solutions. Acknowledging that the supply chain metrics would be different for each industry, average metrics were created for each industry and IDC then identified ‘top performer’ companies for each metric, i.e. companies whose performance significantly exceeds the industry average. Building upon this analysis, four ‘performance groups’ were defined according to the number of metrics in which each company outperformed its industry average.
Leaders – Companies that are “top performers” in 4 or more metrics
Experts – Companies that are “top performers” in 2 or 3 metrics
Beginners – Companies that are “top performers” in just one metric
Laggards – Companies that are never “top performers”
(A small sketch of this grouping logic appears at the end of this post.)
Now I could just provide the final chart that shows the correlation between B2B integration and these four performance groups; however, to get a better understanding of this study and the responses we got from these 270 global manufacturers, I would actively encourage you to download a copy of the study, which is available to download FROM HERE. IDC drew a number of conclusions from the results of the study, and the complete list of recommendations is available by downloading the study; however, some key points include:
Start from Business Integration to Achieve Collaboration – To obtain a comprehensive view of the extended supply chain and collaborate with business partners, you should first be able to integrate with them
Redesign Supply Chains – Having a collaborative information exchange process is core to being able to support global trading partners and ensure that supply chains are resilient in the face of volatile demand or unexpected supply chain disruptions
Acknowledge the Opportunity of Elevating the Role of Your B2B Infrastructure – B2B infrastructures are in many cases still considered a commodity tool, but moving forward manufacturers will need to make the B2B platform ‘the central information exchange layer of the organization’
In summary, the study demonstrated that manufacturers can achieve hard benefits by improving their B2B-related processes. In fact, the study demonstrated a strong correlation between having a pervasive, modern and collaborative B2B platform in place and being a leader in supply chain performance. To get a better understanding of the analysis, and to get IDC’s direct response to the findings from the study, I would encourage you to DOWNLOAD the study, and if you have any questions then please do not hesitate to contact OpenText. Over the next few weeks I will take a deeper look at some of the industry-specific results from the study.
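As promised above, here is a small sketch of the grouping logic: classify a company by counting the metrics in which it significantly exceeds its industry average. The threshold margin and the sample figures are invented for illustration only; they are not taken from the IDC study.

```python
def count_top_performer_metrics(company: dict, industry_avg: dict, margin: float = 1.2) -> int:
    """Count metrics where the company exceeds the industry average by a significant margin."""
    return sum(1 for metric, value in company.items()
               if value > industry_avg.get(metric, float("inf")) * margin)

def performance_group(top_metric_count: int) -> str:
    """Map a top-performer count onto the study's four performance groups."""
    if top_metric_count >= 4:
        return "Leader"
    if top_metric_count >= 2:
        return "Expert"
    if top_metric_count == 1:
        return "Beginner"
    return "Laggard"

# Invented example figures; higher is assumed to be better for every metric shown here.
industry_avg = {"perfect_orders": 0.90, "inventory_turns": 2.0, "invoices_per_hour": 5.0}
company = {"perfect_orders": 0.97, "inventory_turns": 3.1, "invoices_per_hour": 9.0}

n = count_top_performer_metrics(company, industry_avg)
print(n, performance_group(n))  # 2 Expert
```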


Treat Contract Management as a Strategic Business Process

EDMS

It is well accepted that contracts are at the crux of every business, and the value delivered by underlying contract management systems can have a direct impact on an organization’s top-line revenues, costs and regulatory compliance (see related post: Contract Center: The Hat-Trick of Business Value). Yet organizations often treat contracts simply as important legal documents that, once signed, are saved away in shared folders or content repositories (or even in filing cabinets) until a need arises to look for them. By the way, studies show that a not-so-small percentage of these “safely stored” contracts cannot be found when they are needed later. Contract management really needs to be looked at as a broader, end-to-end, strategic business process, rather than solely the activity of aggregating contract documents within a content repository. And, like any organizational core competency or strategic asset, this business process—also referred to as the contract lifecycle—needs to be efficient, consistent, flexible, and built on best practices. It needs to be monitored, analyzed, and continuously improved. But different departments are concerned with their own types of contracts, and each may have its own policies (often undocumented or inconsistent), procedures (some manual, some semi-automated), and people (sometimes disconnected or disorganized) for dealing with the drafting, negotiation, and renewal of contracts. A centralized and collaborative platform to manage all contracts transparently is desirable, but how could one fulfill the unique requirements (policy, procedure, and people) of each contract type, while also ensuring that every individual contract moves flawlessly from request through execution, or from enforcement through renewal? This is where a cutting-edge BPM platform can play a critical role. The newly launched Contract Center application fully harnesses the power of the OpenText Process Suite to orchestrate the contracting process, automate related workflows, execute rules, assign tasks, send reminders, enforce deadlines, track milestones, remove bottlenecks, and implement best practices for all contract types (a simplified sketch of such a lifecycle appears at the end of this post). Contractual terms and other information entered or generated as part of a contracting process can be used not only to build dashboards, reports, and integrations, but also to drive and optimize the progression of the process itself. Contract documents to be authored, negotiated, redlined, or executed are contextually and seamlessly integrated into each stage of the contract lifecycle, and securely governed in OpenText Content Server, which provides best-in-class ECM capabilities. So with Contract Center, contract managers and legal staff can quickly locate (or be alerted to) and stay on top of their active or in-progress contracts at all times, and can rest assured that no contract will ever go out of sight. For more information on Contract Lifecycle Management (CLM) and OpenText Contract Center, view this recorded webinar, Three Pitfalls of Poor Contract Lifecycle Management—and How to Overcome Them.
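To make the idea of orchestrating a contract lifecycle more concrete, here is a minimal state-machine sketch of the request-through-renewal flow described above. The stage names and transitions are a simplified assumption for illustration; they are not the actual workflow model shipped in Contract Center.

```python
# Hypothetical, simplified contract lifecycle; not the actual Contract Center workflow model.
STAGES = ["request", "drafting", "negotiation", "approval", "execution", "active", "renewal", "expired"]

TRANSITIONS = {
    "request": ["drafting"],
    "drafting": ["negotiation"],
    "negotiation": ["drafting", "approval"],   # redlines can send a draft back
    "approval": ["negotiation", "execution"],
    "execution": ["active"],
    "active": ["renewal", "expired"],
    "renewal": ["negotiation", "active"],
}

class Contract:
    def __init__(self, name: str):
        self.name = name
        self.stage = "request"
        self.history = ["request"]

    def advance(self, next_stage: str) -> None:
        """Move the contract to the next stage, enforcing the allowed transitions."""
        if next_stage not in TRANSITIONS.get(self.stage, []):
            raise ValueError(f"{self.name}: cannot move from {self.stage} to {next_stage}")
        self.stage = next_stage
        self.history.append(next_stage)

c = Contract("Supplier MSA")
for step in ["drafting", "negotiation", "approval", "execution", "active"]:
    c.advance(step)
print(c.stage, c.history)
```

A real BPM engine adds what the sketch leaves out: task assignment, reminders, deadlines and an audit trail attached to each transition.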


Achieving ROI from Enterprise Archiving: Part 3 – Intensifying Pressure for Compliance

Meeting organizational, legal, and regulatory compliance obligations is a direct advantage of enterprise archiving. Your notion of compliance may differ based on your market, the geographic regions where you do business, and—if you are considering adding cloud into your information management strategy—compliance even touches the physical location where you archive your corporate content.
Organizational Compliance
Even if your organization is not bound by the same regulations as industries such as financial services, pharmaceuticals, or the various levels of government, keeping too much content, or over-retention, is not a viable strategy. Business-relevant content needs to be managed and readily accessible to your users; content such as contracts, legal agreements, human resources documents, and more needs to be classified as business records, while content of a non-business nature, or “transient” content, should be managed and disposed of appropriately under policy as well. Compliance doesn’t need to be complex. Build straightforward policies for your business-relevant content and rules on how to handle both business records and non-business content, and stick to them (a small sketch of this idea appears at the end of this post). Adherence to policy is the best way to provide evidence that you’ve taken reasonable efforts to manage your enterprise content and met your duty of care, safeguarding against sanctions or fines.
Legal Compliance
Historically speaking, the Federal Rules of Civil Procedure (FRCP) and U.S. regulators like FINRA have given rise to compliance-driven archiving. There are clear signs, however, that sanctions, fines, and growing legal threats are intensifying compliance concerns outside the U.S. as well. At the end of the day, regardless of your geographic region or market, it’s not a matter of if a legal event will occur, but when—and inadequate information management practices will cost you. Legal compliance issues can arise simply by not employing proper retention policies, which can be seen as negligent in the eyes of the court and result in spoliation sanctions. Some striking examples include:
In 2009, MetLife was fined $1.2 million for improper monitoring of email archiving obligations under FINRA
In 2010, Piper Jaffray was fined $700,000 for improper retention of email
In 2010, LPL Financial was fined $9 million for email system failures and failure to comply with FINRA requirements
In 2013, ING was fined $1.2 million for improper email retention and failure to comply with FINRA requirements
In 2013, Barclays was fined $3.5 million for email retention failures
In 2013, Boehringer Ingelheim was fined $931,000 for losing files, including text messages, related to Pradaxa drug trials
In 2013, a federal court sanctioned the government for failing to meet its duty to preserve website content advertising a $32 million Department of Veterans Affairs procurement
EU Data Protection Regulation: fines of up to €100 million proposed
This list highlights the need for proper information governance across not just email but all enterprise sources. The savings in fines, along with the avoided damage to your brand, shareholder value, and reputation, make the deployment of enterprise archiving an imperative investment for any organization.
Regulatory Compliance
There’s a growing list of regulatory standards being enforced across markets and regions. Enterprise archiving and information governance practices help organizations comply and meet regulatory requirements as they continue to evolve and intensify.
Ask These Questions When Determining the ROI Related to Compliance:
1. What obligations is your organization bound by in terms of compliance?
2. What sanctions for information management failures have you or your peers been hit with?
3. Can you easily preserve potentially relevant and responsive content?
4. Does your organization feel over-retention is a concern, or is it a necessary price to pay for compliance?
5. If you’re considering a move to the cloud, can you meet data sovereignty needs and compliance under one roof?
6. What other sources should be archived to ensure full compliance?
Manage and Protect Your Information with Information Governance
Establishing an information governance practice helps organizations meet compliance obligations, reduce storage and operational costs, and mitigate legal risks. Establishing a tangible return on investment, however, is not easy to quantify and depends on many factors—some unique to your organization. Information governance is about managing and protecting your information to make sure it’s working for—not against—your organization. OpenText Information Governance solutions make it easy for your organization to maximize the value and minimize the risks of your information, and to develop a blueprint for achieving return on investment. For more information on deriving ROI blueprints from enterprise archiving and information governance, visit www.opentext.com/archive.
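As a concrete illustration of the "straightforward policies" point above, a retention schedule can be expressed as a small mapping from content classes to retention periods, with transient content disposed of quickly. The classes and periods below are placeholders for illustration only; real schedules come from legal and records management, not from a code sample.

```python
from datetime import date, timedelta
from typing import Optional

# Placeholder retention schedule: content class -> how long to keep it.
# The periods here are illustrative only, not recommended retention periods.
RETENTION = {
    "contract": timedelta(days=365 * 7),
    "hr_record": timedelta(days=365 * 6),
    "transient": timedelta(days=90),
}

def disposition(content_class: str, created: date, today: Optional[date] = None) -> str:
    """Return 'retain' or 'dispose' for a piece of archived content under the schedule."""
    today = today or date.today()
    period = RETENTION.get(content_class)
    if period is None:
        return "retain"  # unknown classes are kept pending review rather than deleted
    return "dispose" if created + period < today else "retain"

print(disposition("transient", date(2014, 1, 10), today=date(2015, 3, 1)))  # dispose
print(disposition("contract", date(2014, 1, 10), today=date(2015, 3, 1)))   # retain
```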


Did You Know That 80% of High Tech Companies are ‘High Adopters’ of B2B Integration Technologies?

A few weeks ago I posted a blog summarising the automotive-related results from a recent B2B study that OpenText sponsored. The aim of the study was to see if there was a direct correlation between B2B integration and supply chain performance. I will take a look at the CPG-related results in my next blog, but as I am spending this week in the heart of Silicon Valley on the US West Coast, I thought it only appropriate to discuss the high tech results in this blog article. We recently hosted a webinar with IDC to discuss the findings from the study; you will be able to get access to this and other downloads related to our study at the end of this blog. The global high tech industry is going through a major renaissance at the moment, with new business opportunities being presented in the automotive, wearable devices and Internet of Things sectors. In fact, I would say that high tech companies are investing more in Internet of Things related technologies than any other industry sub-sector at the moment, for example Intel’s investment in a new generation of chips for embedded devices. All this focus on new investment areas presents further opportunities for consolidation across the industry, and only last week NXP Semiconductors announced its intention to acquire its smaller rival Freescale Semiconductor. Continued M&A activity will present new challenges for B2B managers across the industry as they are forced to consolidate multiple B2B networks onto a single global B2B network. Increased regulatory compliance, such as conflict minerals compliance, is being adopted by more regions around the world as a way of removing so-called ‘3TG’ minerals (tin, tantalum, tungsten and gold) from global supply chains. Increased regulatory compliance is driving a need for companies to think about how they manage their trading partner communities and, ultimately, how they should be working more collaboratively with their global trading partners. Finally, this week will see high tech supply chains gearing up for the launch of the next big consumer must-have gadget, as Apple’s iWatch is finally being released. Apple is a past master at readying its supply chain for such product launches, and it nicely illustrates how consumer-driven the high tech industry has become. So now let me discuss a few of the high tech related results from our study: 79% said they exchange B2B transactions electronically with their trading partners. I guess there is no surprise here that high tech companies have a high expectation to exchange business documents electronically with their trading partners. As with the automotive industry, the high tech industry is truly global in nature, and in the case of semiconductor chips they are manufactured in a multi-stage process that embraces many different production and finishing locations around the world. To encourage greater participation from trading partners around the world, the high tech industry introduced its own highly successful XML-based document format called RosettaNet, which is still very much in use across the industry today. 58% said that B2B adoption had reduced their procurement costs. Greater visibility into the supply chain, and in particular into inventory locations around the world, meant that high tech companies could reduce their procurement costs by being able to better optimise inventory from multiple locations around the world.
In addition, the costs and time to manually process transactions across the procure-to-pay process can be reduced by providing high tech trading partners with the right B2B tools according to their technical capabilities. 54% said that shipment status was one of the most important B2B transactions in use across their industry today. Knowing when supplier shipments are going to turn up at the factory gate is crucial to the smooth running of today’s production lines. Connecting to a single, global, cloud-based B2B platform such as OpenText Trading Grid provides the end-to-end visibility that high tech manufacturers require. It is not just about improved visibility into the direct materials supply chain, but also into the aftermarket repair business, where field service teams need to know when spare parts will arrive; being able to tell a customer that their high tech product will be repaired by a specific date is key to improving customer satisfaction levels. 47% said that competing IT projects such as ERP were a barrier to starting B2B projects. Given that ERP projects, such as a major SAP deployment, are the most expensive and hence highest profile IT projects under the control of the CIO, it is no wonder that they tend to get 100% attention from IT resources during a roll-out phase. Having all IT resources diverted to an ERP deployment can potentially disrupt other IT initiatives, such as a B2B program. Then again, I would argue that if 47% of high tech companies see ERP as a barrier to B2B adoption, ERP implementation provides the ideal opportunity to think about integrating ERP and B2B platforms together. ERP-to-B2B integration is a key reason why many high tech companies have deployed our Managed Services platform to provide a single outsourced integration platform. So the barrier in this case certainly provides an opportunity for B2B integration. 42% said they processed invoices in real time with trading partners. In Europe, for example, with 28 member countries of the European Union, there are 28 different tax compliance laws, 28 different ways to apply digital signatures and 28 different ways to archive invoices. If you are a high tech company based across the border in one of the Eastern European countries, such as Slovenia, then navigating your way through invoicing compliance in Western Europe is a complex process. The high tech industry is not only consumer driven but fast moving in nature, and its suppliers need to make sure they can be paid quickly in order to fulfil orders to their numerous customers in a timely manner. Adopting B2B integration, and in particular electronic invoicing, can significantly reduce invoice processing times, and working with a company such as OpenText that offers electronic invoicing solutions means that you can work with suppliers in any country, irrespective of the invoice regulations that may be present in those countries. In fact, one further piece of analysis that we did as part of this project found that automating invoicing processes through the use of B2B integration technologies such as electronic invoicing had increased the speed of invoice processing by 156%.
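One way to think about the per-country compliance problem described above is as a lookup of country-specific invoicing requirements applied at the point an invoice is issued. The sketch below shows only that shape; the country entries and fields are placeholders, not actual regulations, and real e-invoicing services maintain and update these rules for you.

```python
# Placeholder compliance table; the values are illustrative, NOT actual country regulations.
INVOICE_RULES = {
    "DE": {"digital_signature_required": True, "archive_years": 10},
    "FR": {"digital_signature_required": True, "archive_years": 10},
    "SI": {"digital_signature_required": False, "archive_years": 10},
}

def requirements_for(country_code: str) -> dict:
    """Look up the e-invoicing requirements to apply when issuing an invoice to a buyer country."""
    try:
        return INVOICE_RULES[country_code]
    except KeyError:
        raise ValueError(f"No compliance rules loaded for {country_code}; cannot issue invoice")

print(requirements_for("DE"))
```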
As mentioned earlier, this is due to the fast-paced nature of the industry, with nearly 99% of high tech respondents performing two inventory turns per month, and the need to have a highly responsive supply chain network that can adapt to continually changing market dynamics. This is amplified by the diverse range of trading partners involved across the high tech supply chain, from contract manufacturers (who make products for many different customers) to distributors, and from fabless semiconductor manufacturers to raw material providers. Exploiting new market opportunities over the next three years was one of the key initiatives being undertaken by high tech companies. 57% of South Korean respondents, a high proportion of whom were from the high tech industry, said that supply chain complexity was a key barrier to B2B adoption. However, I would argue that if companies chose a cloud-based B2B platform, this would not only help to reduce supply chain complexity but would also provide the flexibility and scalability that the fast-moving high tech industry urgently needs. If you would like to download your own copy of the new B2B study from OpenText then please complete the registration form here. When you have registered you will also be able to get access to an on-demand webinar that we recently recorded with IDC, a copy of the webinar slides and an infographic that illustrates some of the key findings from the study.


Contract Center: The Hat-Trick of Business Value

A “hat-trick” is something good that happens in threes, typically three scores in a single game in hockey, soccer (football for those of you outside the U.S.), and cricket (for those really outside the U.S.). Being a bit of a sports fan, this is the first thing I thought of as OpenText launched our Contract Center today. Contract Center is a new contract lifecycle management solution that brings together our Process Suite BPM platform with our Content Server ECM system, and wraps those up in an application that includes contract lifecycle best practices, processing functions, workflows, and user interfaces. It is an enterprise-scale platform for sell-side contracts, buy-side contracts, and all types of legal agreements. The hat-trick is due to the fact that Contract Center provides three types of benefits to an organization: top-line benefits, bottom-line benefits, and security/regulatory benefits. Many software solutions claim to have all three—but really don’t. Score number one is the top-line revenue benefit of effective contract processing, especially on the sell side. Shorter contract development and execution times due to direct input of approved clauses and phrases, automated negotiation cycles with customers, notifications around contract renewals, and generally a more efficient, effective, and customer-friendly contracting process mean buying relationships get off on the right foot and stay that way. Score number two is the cost savings, which are even more significant. Usually, very highly paid legal resources create contracts and are involved in negotiation and amendment, joined by a significant number of people facilitating manual approval cycles, which makes for a very labor-intensive and expensive process. In addition, Contract Center captures metadata about contracts, so order volumes and renewals are tracked and alerts are sent so organizations can maximize the value of their contracts in terms of volume discounts, incentives, rebates, and on-time renewals. A side benefit of this automation—besides the savings associated with reducing time and people—is the transparency. With Contract Center, users can see exactly where a contract is at any stage in the process, and what is needed to move it forward. The third score to make the hat-trick is that many organizations have internal information governance mandates or regulatory requirements for contract processing, which are both *very* hard to meet without an automated and flexible system. Contract Center is completely integrated with the industry-leading Content Server repository, which provides complete control of a contract at every stage of its lifecycle, and includes strict authentication and security, full auditability, version control, and retention and termination when appropriate. Compliance and security are very expensive and inconsistent without a system such as Contract Center, and the penalties for not getting it right include fines, suspensions, and the inability to produce documents for litigation—adding up to extended time in the penalty box for the management team. The system can be augmented with OCR/ICR functionality via our Capture Center product and contract composition via our StreamServe product. These are integrated out of the box and allow organizations to craft the right solution for their specific needs.
Contract Center takes something very expensive and important for an organization and makes it automated and secure, and it’s flexible enough to meet the needs of a variety of organizations and contract types. With our ability to improve the top line, reduce costs (together increasing profits), and manage compliance, I’m sure even Wayne Gretzky would be impressed. As the record holder for the most three-goal games in an NHL career (with 50 total), I’m sure he’s had some pretty big contracts in his lifetime as well.


Document is Empty: How Inaccessible Online Documents Impact Screen Reader Users – In Their Own Words

The opening video from Actuate’s Automation and the Changing Landscape of Section 508 event – I’m excited to finally be showing this here. Imagine if you opened a web page or a PDF and the screen was blank. You scrolled all around the screen, hit several keys on your keyboard, wiggled and jiggled your mouse, even closed the document and opened it again. Where is the content that is supposed to be on this page? Where is the information I need that is supposed to be in this document? Well, that’s exactly what it’s like every day for screen reader users when they attempt to read documents that are not tagged for accessibility. For the longest time we have wanted to voice the frustrations that the blind and visually impaired encounter every day to those who have never experienced them. This past December we debuted a video at Actuate’s accessibility event in DC, Automation and the Changing Landscape of Section 508. The video opened the event, which by the way was well attended by more than 100 people, including those who were blind or visually impaired and those who were not. For effect, we cut the lights and set the stage for those who had never used a screen reader program before to experience it firsthand. The laughter rolled as the audience instantly recognized the unmistakable default voice of Freedom Scientific’s screen reader, JAWS, announcing that the document was empty while the screen showed nothing but black. For the next several minutes, individuals including a business owner, a student, a homemaker, a screen reader product manager, a 508 SME, a 508 tester, an advocate and others brought you into the world they face every day, struggling for the same access and independence many of us don’t recognize we have. More importantly, they share the impact. The feedback we got from attendees on the video was even more positive than we’d hoped it would be. I think we hit this one right on target. Now I finally get to share it with all of you. I’ll let you hear it straight from the people who are impacted most when it comes to accessible content – those who use it. Watch the video below to hear what they have to say. And I’d love to hear your feedback! Document is Empty: The Screen Reader User Experience is the result of months of traveling North America, speaking to people from the visually impaired community. The goal of the video was to help organizations understand the frustrations and impact of inaccessible information from the user’s perspective – a true UX. What many organizations don’t know is that there is technology that solves this issue, affordably and nearly effortlessly, with automation. Actuate’s automated PDF remediation technology allows high-volume and repeatable customer communications, like statements, bills and notices, to be generated as accessible PDFs. This could be a real game changer according to those in the video – but you decide after you take a look. Those appearing in the video are consultants and accessible technology experts, but also regular people who aren’t involved with the technology issues at all – except that they live them every day. Like everyone, they use the web to go about their business, from accessing their benefits and health information to checking their financial information. And like everyone, they want to do that privately, without waiting weeks for a hardcopy accessible version they can read, or having to get a sighted person to look at it for them.
The difference is that when organizations don’t provide this information in accessible online formats, they can’t do that. Their screen reader can’t access the information accurately, or often not at all. In popular screen reader parlance, the document is empty – or at least that’s how it appears to those relying on screen reader programs to read and consume documents that aren’t accessible. Creating accessible online documents – whether you’re a private company or a government agency – is not merely about meeting legislative compliance. It’s not simply checking all the right boxes to make sure you meet the regulations for accessibility. It’s about truly providing equal access to all of your customers, constituents or recipients who receive your services – including those who are blind or visually impaired.
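For teams that want a quick way to spot the "document is empty" problem in their own output, a first-pass check is whether a PDF is tagged at all: tagged PDFs carry a structure tree and a MarkInfo entry in the document catalog. The sketch below uses the open-source pikepdf library for that check, under the assumption that compliant files declare both entries; it is a coarse screen only, not a full accessibility audit, and it is not how Actuate's remediation technology works.

```python
# Requires: pip install pikepdf. A coarse check only; real audits verify reading order, alt text, etc.
import pikepdf

def looks_tagged(path: str) -> bool:
    """Return True if the PDF declares itself tagged (structure tree plus MarkInfo present)."""
    with pikepdf.open(path) as pdf:
        root = pdf.Root  # the document catalog
        return "/StructTreeRoot" in root and "/MarkInfo" in root

print(looks_tagged("statement.pdf"))  # "statement.pdf" is a placeholder file name
```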

