Compliance

Building Supply Chain Resilience and Compliance

supply chain resilience

As a connected set of companies linking individual component sources to a final product, the supply chain bridges the gap between suppliers and the ultimate end user. When it works, it is an impressive orchestration of many moving parts working in concert to deliver products to customers. But what happens when a critical link in the chain fails due to business disruption, natural disaster, financial issues, or a problem within its own supply chain? In today’s globalized economy, companies’ supply chains are growing bigger and more complex. While these business relationships can deliver gains in productivity and profitability, they can come at the price of additional risk exposure. Third-party risk management is the fastest-growing governance, risk, and compliance (GRC) technology market and is cited as the most challenging aspect of a compliance program [Deloitte-Compliance Week 2015 Compliance Trends Report]. In fact, 77 percent of manufacturing firms report increasing supply chain complexity as the fastest-growing risk in business continuity. Organizations are looking to technology to help ensure supply chain resilience, with a fierce focus on protecting their organization’s brand, reputation, assets, and data.

Big supply chains call for big data

Supply chain executives are placing great store in the potential of big data. In fact, an SCM Chief Supply Chain Officer Report showed that they believe big data analytics to be more valuable than the Internet of Things, cloud computing, and 3D printing. More manufacturing firms are adopting big data strategies to tackle a wide range of risk factors within the supply chain, including minimizing risk within a global supply chain and managing supplier performance.
Tip – Choose a big data analytics solution built for business users and analysts who want a fast, easy way to access, blend, explore, and analyze data without depending on IT or data experts, such as OpenText™ Big Data Analytics.

You’re a good corporate citizen. Are your suppliers too?

It has been well established that having a clear, effective corporate social responsibility (CSR) program is good for business. Many customers seek out and want to do business with vendors who share their values and compliance culture. For example, demonstrating that a company’s supply chain is conflict-free reassures stakeholders that the company is compliant and engenders trust among suppliers, consumers, and others. The SEC’s Dodd-Frank conflict minerals rules and the EU’s REACH mandate and RoHS Directive are just a few of the regulations forcing companies to take a hard look at their supplier ecosystems. However, compliance is threatened when suppliers fail to provide needed information. Only 22 percent of companies required to file conflict minerals reports by a June 2014 deadline did so – most stating that their supply chains were too complex, or that suppliers did not respond to questionnaires or did not provide complete or adequate responses. Further, since reporting became mandatory in 2014, more than 70 percent of U.S. companies say they still cannot determine that their supply chains are free from conflict minerals.

Tip – Firms are turning to sophisticated information exchange solutions for supplier self-assessment – such as OpenText’s Conflict Minerals Reporting solution – to ensure compliance in areas such as conflict minerals, anti-slavery, and sustainability.
Managing risk begins with the onboarding process

Given the vast amount of supplier data that exists across the enterprise, technology offers an easy way to import, structure, organize, and consolidate this data in one place, and then map it to the associated supplier risks, regulations, controls, locations, and products for better visibility. A successful supplier information management program starts with the right supplier onboarding process.

Tip – B2B suppliers who already send and receive data in a defined EDI format (typically high-volume, large suppliers) buy readily into an onboarding system that uses that same format, such as OpenText™ B2B Managed Services.

When it comes to supply chain disruptions, it is no longer a matter of “if” an incident will happen, but “when” the next one will occur. Choosing a proactive approach and the right technology solutions will only improve your organization’s ability to mitigate, adapt, and respond quickly to threats as they arise – thus strengthening the resilience of your supply chain.
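A common first step in EDI-based supplier onboarding is validating that inbound messages are structurally complete before they enter downstream systems. The sketch below is purely illustrative – a toy X12 parser and envelope check, not how OpenText B2B Managed Services is implemented:

```python
# Minimal sketch: split a raw ANSI X12 message into segments and check
# that the mandatory envelope/transaction segments of an 850 purchase
# order are present. Illustrative only -- not a product implementation.

def parse_x12(message, seg_term="~", elem_sep="*"):
    """Split a raw X12 message into segments, each a list of elements."""
    segments = [s.strip() for s in message.split(seg_term) if s.strip()]
    return [seg.split(elem_sep) for seg in segments]

def validate_850(segments):
    """Return the set of mandatory segment IDs that are missing."""
    required = {"ISA", "GS", "ST", "BEG", "SE", "GE", "IEA"}
    present = {seg[0] for seg in segments}
    return required - present  # empty set means the envelope is complete

raw = ("ISA*00*          *00*          *ZZ*SENDER*ZZ*RECEIVER*210101*1200*U*00401*000000001*0*P*>~"
       "GS*PO*SENDER*RECEIVER*20210101*1200*1*X*004010~"
       "ST*850*0001~BEG*00*NE*PO12345**20210101~"
       "SE*2*0001~GE*1*1~IEA*1*000000001~")

missing = validate_850(parse_x12(raw))
print(missing)  # set() -- all mandatory segments present
```

A real onboarding flow would layer trading-partner agreements, acknowledgments (997/999), and content-level validation on top of this basic structural check.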

Read More

Quality and Innovation From Bench to Clinic – the Role of Data Integrity

quality

According to a recent PwC survey of pharma CEOs, 35% consider strengthening innovation a top priority. At the same time, they are facing increasing challenges from more stringent regulatory requirements and rising standards of quality. The Life Sciences track at this year’s Enterprise World looks at the benefits of building the quality-centric enterprise within Life Sciences. Data integrity is a foundation stone of quality and at the very heart of Digital Transformation. How can organizations gain visibility and control over all their data? Digital Transformation, for me, is not just about replacing paper with digital-based processes. It is about releasing the value an enterprise holds in its data. The challenge is that most organizations have developed as a series of departmental silos. This is certainly true for quality. Separate departments or business units have created their own home-grown or legacy systems – often on Excel spreadsheets – to manage product and process quality. Apart from being hugely expensive to maintain and prone to error, these systems lack enterprise-wide visibility, the ability to share information across the organization and its partners, and the ability to manage by exception.

How reliable is your quality data?

Despite best efforts, the data held within many of these quality systems must be considered less than reliable. Estimates indicate that bringing a new drug to market costs more than a billion dollars and takes up to 15 years, so the effective use of quality data can help drive innovation and reduce costs. For example, a new product submission to the FDA can run to 600,000 pages containing data from a wide range of sources. Proper document management of that submission is essential to remove error and cost from the process. Companies also face regulatory requirements such as GxPs, reporting mandates, international quality standards, and other compliance obligations.
This is simply part of daily operations, and assuring the integrity of the data feeding and driving these systems is paramount. The situation would be challenging enough if it were only internal data the organization had to worry about – neatly held within enterprise applications and databases. However, every Life Sciences company operates in an ecosystem of customers, suppliers, and research partners. The amount of data is exploding, and most of it is unstructured. According to The Economist, as of 2017 more than 1.7 billion healthcare and fitness apps have been downloaded onto smartphones. These apps are collecting mountains of valuable data that companies can use to improve product development and inform innovation.

Towards the quality-centric organization

To become a quality-centric organization, you have to be able to effectively manage multiple data types from multiple sources. This data has to be reliable and available to drive quality through your entire ecosystem (see Figure 1).

Figure 1: Key drivers for the end-to-end, quality-centric Life Sciences organization – spanning supplier processes (supplier audits, corrective actions, cost recovery, KPIs/scorecards), enterprise processes (inspections and audits, non-conformance, CAPA, containment and failure analysis, document control and records management, change control, training, equipment calibration, safety management), and customer processes (issue and complaint management, product change notification, quality scorecards).

It is difficult – if not impossible – to achieve this level of data integrity without a central platform to manage that data and content. With the Documentum acquisition, OpenText is uniquely positioned to provide the most comprehensive Enterprise Information Management platform to Life Sciences companies.
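One elementary building block behind data-integrity guarantees is content fingerprinting: recording a cryptographic digest when a record is captured so that any later corruption or tampering is detectable. The sketch below illustrates the principle with SHA-256; it is a generic example, not a feature of any particular platform:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest that fingerprints document content."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, recorded_digest: str) -> bool:
    """Re-hash the content and compare against the digest recorded at ingest."""
    return fingerprint(content) == recorded_digest

doc = b"Batch record 2017-04-01: assay result 99.2%"
digest = fingerprint(doc)

print(verify(doc, digest))                 # True: untouched content verifies
print(verify(doc + b" (edited)", digest))  # False: any change is detected
```

In regulated environments this kind of check is combined with audit trails and access controls so that who changed what, and when, is as verifiable as the content itself.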
Enterprise application integration, B2B integration and advanced analytics build out the ECM solution to give you new levels of visibility and control of the data across end-to-end business processes. This is the focus for Enterprise World. There will be speakers from some of the world’s largest Life Sciences brands talking about their experiences of driving quality and innovation within their companies. There are still a few spaces available so register today. It would be great to see you there.

Read More

Hands-on Technical Demo: Using Automation to Create Accessible Documents From Scratch

PDF accessibility

Those of you familiar with PDF accessibility know that it can be tricky to create accessible PDFs, especially using automated software with so many different features and tools to learn. Getting the initial set-up right is critical, as thousands, if not millions, of documents depend on the accuracy of the template created to generate WCAG 2.0 Level AA compliant PDFs. If you’re an accessibility specialist who wants to understand how an automated solution can help you generate accessible PDFs, and would like a step-by-step walk-through of creating templates for high-volume accessible PDF documents in the OpenText™ Automated Output Accessibility solution, our webinar on May 30, 2017 is right for you. In this webinar, OpenText’s accessibility expert and Director of Product Management Jeff Williams will demonstrate how to easily create accessible PDF documents from scratch. You will learn how to:

- Use the OpenText Output Transformation Designer to create a project from scratch
- Convert an inaccessible PDF document into a WCAG 2.0 AA compliant PDF
- Leverage the power of OpenText’s Output Transformation Field Technology, including:
  - Content locators for capturing and tagging floating content
  - Table identification and tagging
  - Figure tagging and adding alternative text
  - Managing alternative text in the Alternate Text Manager web application

Don’t miss this unique, hands-on technical demo to learn how easy it is to create WCAG 2.0 Level AA compliant PDF/UA documents. Register today.

About OpenText Automated Output Accessibility

Automated Output Accessibility is a unique offering that uses automation to address the demand for a low-cost, fast, and simple process for converting large volumes of documents into WCAG 2.0 AA compliant formats such as PDF/UA, Braille, and large print. It helps you comply with accessibility legislation such as Section 508 of the U.S. Rehabilitation Act, the Americans with Disabilities Act, the Accessibility for Ontarians with Disabilities Act, and other similar regulations.
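To make the tagging work described above concrete: an accessible PDF carries a structure tree of tags (Document, P, Table, Figure, …), and WCAG 2.0 / PDF/UA require that every Figure carry alternative text. The sketch below walks a toy tag tree – a hypothetical data structure, not any tool’s internal representation – and reports figures missing alt text:

```python
# Toy model of a PDF structure tree, illustrating the kind of check an
# accessibility tool performs: every Figure tag needs alternative text
# (the /Alt entry in real PDFs) to be compliant.

def missing_alt_text(node, path=""):
    """Walk a nested tag tree and collect paths of Figure tags without alt text."""
    here = f"{path}/{node['tag']}"
    problems = []
    if node["tag"] == "Figure" and not node.get("alt"):
        problems.append(here)
    for child in node.get("children", []):
        problems.extend(missing_alt_text(child, here))
    return problems

doc = {"tag": "Document", "children": [
    {"tag": "P"},
    {"tag": "Figure", "alt": "Quarterly revenue chart"},
    {"tag": "Table", "children": [{"tag": "Figure"}]},  # no alt text
]}

print(missing_alt_text(doc))  # ['/Document/Table/Figure']
```

An automated template-driven solution effectively runs checks of this kind, plus reading-order and table-structure validation, across millions of generated documents.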

Read More

Output Management As A Service? Yes, It’s Here!

Output Management

In today’s information-driven world, it can be a challenge to manage all of the content and data available from so many different sources and ensure it is delivered to consumers in the appropriate format, on the right device, at the exact time required – not to mention the limited budgets pushing IT to do more with less. It’s no surprise companies are turning to cloud-based solutions for help. According to Gartner research, 70% of CIOs are looking to change their technology and sourcing relationships in the next 2 to 3 years, with 46% working with new vendor categories such as cloud. “Organizations are pursuing strategies because of the multidimensional value of cloud services, including values such as agility, scalability, cost benefits, innovation and business growth,” said Sid Nag, Gartner research director. “While all external-sourcing decisions will not result in a virtually automatic move to the cloud, buyers are looking to the ‘cloud first’ in their decisions, in support of time to value impact via speed of implementation.”

Enter OpenText™ Output Extender, Cloud Edition. Output Extender is an industry-first Output-as-a-Service offering that enables customers to run their output management under a SaaS subscription model for data and print stream transformation, multi-channel delivery, and generation of regulatory-compliant accessible formats. Organizations can automate, simplify, and streamline their output management through this hosted cloud service, which provides multi-channel assured delivery. Transformation services with job tracking, retry, and device failover ensure your business-critical documents are always processed for fulfillment, statement presentment, and report distribution needs.
Output Extender also supports regulatory compliance for high-volume documents such as statements, bills, and customer notifications by enabling the transformation of PDF documents into accessible formats for visually impaired individuals, complying with regulations such as Section 508 of the U.S. Rehabilitation Act, the Americans with Disabilities Act, the Accessibility for Ontarians with Disabilities Act (AODA), and the U.K. Equality Act. OpenText™ Content Server customers can extend their EIM value proposition to mainframe content and other legacy reports and statements with Output Extender. Organizations can also use Output Extender to consolidate repositories and eliminate expensive term-based MIPS pricing while opening their content to the Content Suite EIM platform – all in the cloud, without having to install or manage any additional software or infrastructure. The total cost of ownership (TCO) of consuming cloud services like Output Extender is typically lower than running software internally or purchasing perpetual licenses, and organizations reap additional cloud benefits, including rapid time to market, lower infrastructure costs, reduced IT management responsibilities, maintenance-free implementation, and an easy-to-use interface with minimal configuration. Learn more about Output Extender.
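The “job tracking, retry and device failover” behavior described above is a classic delivery pattern: try each output device in turn, retry transient failures, and record every attempt so no job is silently dropped. A minimal sketch of the pattern (illustrative only, not the Output Extender implementation):

```python
# Retry-and-failover delivery sketch: attempt each device with a bounded
# number of retries, keeping an audit log of every attempt.

def deliver(job, devices, send, retries=2):
    """Attempt delivery on each device in order, with per-device retries."""
    attempts = []
    for device in devices:
        for attempt in range(1, retries + 1):
            try:
                send(device, job)
                attempts.append((device, attempt, "ok"))
                return attempts            # delivered: stop here
            except IOError:
                attempts.append((device, attempt, "failed"))
    raise RuntimeError(f"all devices failed for job {job!r}: {attempts}")

# Simulated transport: the primary printer is down, the backup works.
def send(device, job):
    if device == "printer-1":
        raise IOError("offline")

log = deliver("statement-042", ["printer-1", "printer-2"], send)
print(log[-1])  # ('printer-2', 1, 'ok')
```

A production service adds persistence of the attempt log (the “job tracking” part), backoff between retries, and alerting when the whole device list is exhausted.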

Read More

Personalized Public Services Require a New Era of Information Sharing

personalized public services

Citizens want to interact digitally with government – and it’s not just the young. People of all ages are happy to have services delivered digitally as long as they are targeted, easy to use, and secure. Personalization not only offers a better citizen experience, it can deliver the holy grail of agile, low-cost operations. All it takes is a new era of information sharing between government, NGOs, businesses, communities, and individual citizens. So, where do we go from here? It’s time to re-assess. I want to suggest that the first generation of digital government – think of it as eGovernment 1.0 – is reaching its conclusion. We knew there was a need to deliver services digitally and we wanted to provide them on the channel the citizen prefers. It would radically improve citizen experience and make our operations more efficient. Well, there’s some good news and there’s some bad news. The good news is that digital adoption has been a success. Over 40% of respondents to a recent survey reported that the majority of their interactions with government were digital. Almost 90% stated that they want to maintain or increase their digital interactions. The bad news? Only a quarter of the people surveyed by Accenture were actually satisfied by their digital interactions with government. Consider that the respondents’ top five priorities included ‘the ability to have my question answered definitively’ (91%), to ‘be able to see the status of my request or activity’ (79%), and ‘information organized by my need or issue’ (69%) when it came to digital public services. It’s clear the investment made in digital government has yet to consistently deliver the level of information and personalization that citizens want. UK government minister Ben Gummer has stated that although their digital services ‘delivered excellent web interfaces that better met user needs, back-office processes were often unchanged.
In eGovernment 1.0, our focus on citizen experience – while perfectly justified – is failing to deliver the full benefits of Digital Transformation’.

eGovernment 2.0

So what about eGovernment 2.0? McKinsey says ineffective governance, a lack of web capabilities, and a reluctance to allow user involvement have held eGovernment 1.0 back. I’d like to add something a little more fundamental to that list: a model of information sharing at the heart of service provision and delivery. This is implicit in how the OECD defines ‘digital government‘, which, it says, relies on an ‘ecosystem composed of government actors, non-governmental organizations, businesses, citizens’ associations and individuals, which supports the production of access to data, services and content through interactions with the government’. This requires a new ethos for sharing information in a sector where even different departments within the same government organization have jealously guarded their own turf. To fully benefit from digital government, the walls have to come down between departments and agencies, while becoming much more porous when dealing with the private sector and the individual citizen.

Personalized public services

To achieve the ambition of personalized service, governments have to move from the position of service provider to service facilitator or broker. The citizen needs to be able to self-select and self-manage if personalization is to be fully adopted. There has to be an acceptance that this is not something government can achieve by itself – and, in fact, there are great cost benefits to taking a partnership approach with citizens and private enterprise. We will see an increase in the co-creation of services as we move into eGovernment 2.0. There is plenty of evidence of it beginning to happen.
The US Smart Cities open data initiative is a great example of encouraging government, the private sector, NGOs, and citizens to collaborate and jointly develop solutions. Underpinning this collaborative approach to delivering co-created, personalized services has to be a government platform that allows for the open and secure exchange of information. There has to be a means to centralize access to all content so that all parties can access and interrogate all the information – both structured and unstructured data – surrounding an issue or service. While the current focus has been on the creation of ‘open’ data that anyone can access, reuse, or distribute, there has to be a move towards an Enterprise Information Management approach that can deliver a single view of service information. There are, of course, many challenges – not least the difficulty of sharing sensitive information between public and private sector organizations. The passage of the Cybersecurity Information Sharing Act, needed to enable information sharing between public and private bodies on something as uncontentious as tackling cybercrime, shows the complexity of opening up the exchange of critical data. Next month, I’ll look at approaches to governance that can enable this new era of information sharing. In the meantime you can read further blogs on government-as-a-platform and agility, and digital transformation vs digitization.

Read More

AI-Enhanced Search Goes Further Still With Decisiv 8.0

Decisiv

OpenText™ Decisiv extends enterprise search with the power of unsupervised machine learning, a species of AI. I recently blogged about how Decisiv’s machine learning takes search further, helping users find what they’re looking for, even when they’re not sure what that is. Now, Decisiv 8.0 – part of OpenText™ Release 16 and EP2 – takes the reach and depth of AI-enhanced search even further.

Take Search to More Places

In addition to being embedded in both internal and external SharePoint portals, Decisiv has long been integrated with OpenText eDOCS, enabling law firms to combine AI-enhanced search with sophisticated document management. Decisiv also connects to OpenText™ Content Suite, Documentum, and a wide range of other sources to crawl data for federated search. Decisiv 8.0 expands these integrations with the introduction of a new REST API. With this release, administrators can efficiently embed Decisiv’s powerful search capabilities into an even broader range of applications, such as conflicts systems, project management, CRM, and mobile-optimized search interfaces.

Take Search Deeper

Other enhancements in Decisiv 8.0 include a new Relevancy Analysis display, which shows researchers precisely why their search results received the rankings they did and even lets them compare the predicted relevance of selected documents. This helps researchers prioritize their research more effectively and helps administrators understand how the engine is functioning and being leveraged across the enterprise. New Open Smart Filter display options also help researchers use metadata filters to zero in on useful content. By opting to automatically show the top values in each filter category, administrators can show researchers how to use filters for faster access to the content they need, without training or explanation.
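Embedding search into another application via a REST API typically means assembling a JSON query and posting it to a search endpoint. The endpoint path and parameter names below are hypothetical illustrations of the pattern, not documented Decisiv 8.0 API names – consult the product’s API reference for the real contract:

```python
# Sketch of constructing a REST search call from an embedding application.
# "/api/search", "query", "filters" and "pageSize" are invented names used
# only to illustrate the general shape of such an integration.

import json
import urllib.request

def build_search_request(base_url, query, filters=None, page_size=25):
    """Assemble an HTTP POST request for a hypothetical search endpoint."""
    payload = {"query": query, "filters": filters or {}, "pageSize": page_size}
    return urllib.request.Request(
        f"{base_url}/api/search",                      # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("https://search.example.com", "indemnification clause",
                           filters={"source": "eDOCS"})
print(req.full_url)  # https://search.example.com/api/search
```

The embedding application (a conflicts system, CRM, or mobile front end) would send this request with its own authentication and render the returned hits in its own UI.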
Decisiv Goes Beyond Legal

Decisiv’s premier law firm customer base leaves some with the impression that Decisiv is just for legal teams. In fact, Decisiv’s machine learning isn’t limited to any specific industry or use case, because it analyzes unstructured content on a statistical basis rather than a taxonomic one. (Surprisingly, sometimes lawyers do lead the way on versatile technology.)

…and Decisiv Goes to Toronto

Learn more about Decisiv Search and our other award-winning Discovery Suite products at Enterprise World this July. You’ll hear from top corporate, law firm, and government customers about how their enterprises are leveraging OpenText’s machine learning to discover what matters in their data.
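To see what “statistical rather than taxonomic” analysis means in miniature: instead of classifying documents against a predefined taxonomy, statistical ranking scores documents by how their word distributions relate to the query. The toy TF-IDF example below shows the principle only; real engines are far more sophisticated:

```python
# Toy statistical relevance ranking: weight each query term by how often
# it appears in a document (TF) and how rare it is across the corpus (IDF).

import math
from collections import Counter

docs = {
    "memo-1": "merger agreement indemnification clause",
    "memo-2": "office party planning committee",
    "memo-3": "indemnification obligations under the merger",
}

def tfidf_score(query, doc_text, all_docs):
    tf = Counter(doc_text.split())
    score = 0.0
    for term in query.split():
        df = sum(1 for d in all_docs.values() if term in d.split())
        if df and term in tf:
            # term frequency weighted by how rare the term is overall
            score += tf[term] * math.log(len(all_docs) / df)
    return score

ranked = sorted(docs, key=lambda d: tfidf_score("indemnification clause", docs[d], docs),
                reverse=True)
print(ranked[0])  # memo-1
```

No legal taxonomy appears anywhere in the scoring, which is why the same mechanism works on any industry’s unstructured content.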

Read More

Bring Content Into Context of the EcoSystem with EP2 (SAP, SharePoint, SuccessFactors, Salesforce, Oracle) and More

EcoSystem

Many of the OpenText EcoSystem products (quick reminder – EcoSystem products are solutions for SAP, Microsoft, Salesforce, and Oracle) are part of the EP2 release, building on the foundation laid down last year with Release 16 and adding features requested by you, our customers, to improve productivity and speed of implementation.

xECM Platform: the xECM family of products has grown this year from xECM for SAP, Oracle EBS, and Microsoft SharePoint with the new products xECM for Process Suite and xECM for Engineering Documents, all based on the platform foundation. You can find more details about these xECM versions, as well as the rest of the Enterprise Applications, in the Content Suite EP2 launch blog here – but wait, don’t go just yet, there’s lots more to read here too! For xECM the platform, EP2 has new features to improve workspace creation from the lead application, better browsing and View integration, as well as batch creation. Also, and probably most importantly, Completeness Check: this much-requested feature ensures that a workspace is fully populated, be it an HR workspace for SuccessFactors or an opportunity workspace for Salesforce. Of course, all of the features in the core platform are available within the various versions of xECM as well.

xECM for SAP EP2 continues its market-leading, Pinnacle Award winning (10 years in a row) integrations into SAP applications with support for SAP Hybris Cloud for Customer, offering full support for EIM in the Account, Lead, and Opportunity workspaces integrated into the C4C UI. To support organisations moving to S/4HANA Cloud, xECM now extends the CMIS interface to support the SAP CMIS profile. Finally, enhancements have been made to the Business Content windows and to the way metadata properties are assigned – improving implementation times and reducing the need for custom development.

xECM for SuccessFactors EP2 adds some very impressive new capabilities, including support for HCM on premise.
This is critical for customers who have HCM and are migrating to SAP SuccessFactors, as you can now view employee data from two systems connected to a single Employee File. Other features include document completeness validation – ensuring all documents are present when a workspace is created – and duration management capabilities to schedule automatic HR document follow-ups to aid HR compliance (e.g. automatic reviews, escalations, and expiry dates). These are just the top three features in xECM for SuccessFactors; there are many more.

xECM for Salesforce EP2 is all about support for Lightning. With EP2, OpenText upgrades the widget integration to Lightning, Salesforce’s latest experience platform. Following Lightning design principles, the integration of the OpenText widget can be flexibly configured to deliver modern enterprise apps. Building on the Lightning design also allows xECM for Salesforce EP2 to be fully mobile.

xECM for SharePoint EP2 adds new capabilities to search, browse, and favourite Business Workspaces from within Outlook, making it seamless to add emails to a workspace from both the desktop and web interfaces. Also in EP2 is support for Office Groups integration, allowing users to view group calendars, tasks, conversations, and notes in the Business Workspace, and finally a new Search Widget allows Office 365 users to search natively for content in the xECM repository.

xECM for Oracle EP2 receives a new coat of paint and now supports the new Smart UI as well as the classic UI, fully integrated into the Oracle EBS (12.1 and 12.2) UI, along with the platform capabilities outlined above.
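The completeness-check idea that recurs above – a workspace should not be considered ready until every mandatory document type is present – can be sketched very simply. The document-type names below are illustrative placeholders, not product configuration:

```python
# Sketch of a workspace completeness check: report which mandatory
# document types are still missing from a business workspace.

REQUIRED_HR_DOCS = {"employment contract", "tax form", "id copy"}

def completeness_check(workspace_docs, required=REQUIRED_HR_DOCS):
    """Return the set of mandatory document types still missing."""
    return required - {d.lower() for d in workspace_docs}

missing = completeness_check(["Employment Contract", "Tax Form"])
print(missing)  # {'id copy'}
```

In practice the required set would come from configuration per workspace type (HR file, opportunity, engineering project), and a non-empty result would block release of the workspace or trigger a follow-up task.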
It’s not just xECM – EcoSystem products that offer specialised capabilities are also covered by EP2. Microsoft Application Governance and Archiving EP2 (AGA) gains the same capabilities as xECM for SharePoint to search, browse, and favourite Business Workspaces from within Outlook, making it seamless to add emails to a workspace from both the desktop and web interfaces. Outlook is where collaboration happens, but it must not become a silo, and the new EP2 capabilities help solve that problem. As well as Outlook integration, AGA also includes the same search capability to allow users to search inside the archive for content from SharePoint. There are also some additional enhancements to simplify common tasks and configuration when archiving content from SharePoint to the archive.

Document Access and Archiving and Archive Center for SAP Solutions, Cloud Edition EP2 have enhanced support for the SAP Fiori UI with the new Business Object Browser, ensuring that all content is properly presented in context, plus a more flexible layout in the SAP Business Content window. Document Access has a new asynchronous report for record declaration to enable full-text search of existing content. Recently licensed features of Archive Center, Cloud Edition include CMIS interfaces and file share archiving, along with additional enhancements to the Archive Center application.

SAP DAM EP2 (just to remind you, this is the number 1 rated DAM solution on the market, and the only solution preferred by SAP Hybris – in case you forgot) has an all-new, more lightweight workflow engine to reduce the time and complexity of setting up the platform. In addition to a host of new UI features, SAP DAM is further integrated into the new Customer Experience capabilities of the Hybris 6.x releases, and SAP DAM 16 EP2 exposes new widgets to ease integration into Hybris SmartEdit.
We also take advantage of the improvements made to the OTMM platform, on which SAP DAM is based and extended (you can read about some of these here). Document Presentment does have an EP2 release, but there are so many new capabilities within this product that it deserves a blog in its own right – to follow shortly. Employee File Management EP2 received some minor updates and fixes. For customers who are wondering where the EP2 launch is for VIM and Business Center – don’t worry, we’ve got you covered in the EP3 launch later in the year. EP2 for the EcoSystem continues to expand on the foundation of Release 16, making xECM the platform even more the future of ECM by bringing content into context, as well as making the other applications in our portfolio even more mission-critical. Keep reading our EcoSystem blogs for details on upcoming events, including a South African Customer Day, Digiruption, SAP Sapphire, and Enterprise World 2017.

Read More

Crashing the ‘Gentleman’s Club’: GDPR in Financial Services

GDPR in Financial Services

The EU’s General Data Protection Regulation (GDPR) comes into force on May 25, 2018. Any organization whose customers include EU citizens will be affected. GDPR is the most far-reaching data protection legislation yet created and is set to impose new levels of rigor on the business process and data management capabilities of Financial Services firms. PwC recently described the approach of many firms to data transfers as a ‘gentleman’s club’ of informal agreements. This is only one area of Financial Services where GDPR will dramatically change business operations. GDPR gives EU residents unprecedented control over their personal data – and personal data is defined so widely that it includes web behaviors and cookies, and anyone can request this information from any organization with which they interact. A quick look at the insurance industry, with its historic under-investment in systems, highlights the challenge: many firms don’t really know what data they hold, where it resides, or how they’re currently using it. I don’t want to go into the details of GDPR legislation – you can access some great resources here – but I will take a brief look at some important implications for Financial Services.

Consent and transparency

The issue of customer consent is perhaps the hottest GDPR topic for Financial Services. Consent, as defined by GDPR, must be ‘freely given, specific, informed and unambiguous’ (GDPR Recital 32) and often ‘explicit’. Your customer must know why they are giving consent, what they are consenting to, and that they have given consent. You can no longer gain consent for one thing and then use the data for a range of other applications. Each data use will need an individual consent. There is still some discussion whether ‘implied consent’ or ‘legitimate interest‘ can still be used to defend the use of personal data. It is safer to follow the GDPR approach to ensure that you are compliant and not caught up in constant litigation over the details.
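The requirement that each data use needs its own consent implies keeping a per-purpose, revocable consent record per data subject. A minimal sketch of that bookkeeping (illustrative only; a real system would also record the consent wording shown, its version, and lawful-basis details):

```python
# Minimal per-purpose consent registry: consent is recorded for one
# specific purpose, is checkable before processing, and can be withdrawn.

from datetime import datetime, timezone

class ConsentRegistry:
    def __init__(self):
        self._records = {}   # (subject, purpose) -> consent record

    def grant(self, subject, purpose):
        """Record explicit consent for one specific purpose."""
        self._records[(subject, purpose)] = {
            "granted_at": datetime.now(timezone.utc), "active": True}

    def withdraw(self, subject, purpose):
        if (subject, purpose) in self._records:
            self._records[(subject, purpose)]["active"] = False

    def may_process(self, subject, purpose):
        rec = self._records.get((subject, purpose))
        return bool(rec and rec["active"])

reg = ConsentRegistry()
reg.grant("alice", "account servicing")
print(reg.may_process("alice", "account servicing"))  # True
print(reg.may_process("alice", "marketing"))          # False: separate consent needed
```

The key design point is the `(subject, purpose)` key: consent to account servicing grants nothing for marketing, mirroring the “one consent per data use” rule above.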
Transparency becomes key when dealing with customers to ensure that you can defend consent as ‘informed and unambiguous’. Financial Services firms will need to revisit their customer-facing contracts. Your contract terms must be plain and understandable: if a regulator can suggest that the contract demands too high a degree of technical knowledge, it is unlikely that they will agree consent was given under GDPR.

Data transfers and the data supply chain

Although consent is gaining the attention, data transfers must be an area of major concern for Financial Services. Modern organizations have established what can be described as a data supply chain, and you will now need visibility and control over how third parties – clients, suppliers, brokers, and partners – use the personal data of your customers. Where data transfers are necessary, you must manage the risks inherent in these transfers and ensure that your customers’ details are properly protected by these third parties because, in many cases, you will be more responsible for their breaches than they are. New contract terms will need to be created to manage third-party relationships and mitigate this risk. If we accept PwC’s description of data transfers as a ‘gentleman’s club’, then GDPR represents a good opportunity to reassess your Information Governance structures. The ability to control the acquisition, management, retention, and disposal of all information – both structured and unstructured – across your business operations helps reduce risk. Sound information governance can facilitate compliance with GDPR and your other regulatory requirements.

Data portability and the right to be forgotten

EU residents will now have the right to receive all the personal data they have previously given, in a commonly used and machine-readable format. The key idea is to be able to switch service providers with ease.
Even though Financial Services firms in many EU countries are already used to porting data between providers, the breadth of information involved – beyond demographic and account information – means that all organizations will need to ensure they can bring all customer information together. Under the right to data portability, an individual can demand that every instance of information on them – held on every business application, portable device, communications system, back-up server and cloud service that your company uses – be transmitted to another data processor "without hindrance". The 'right to be forgotten' means that the data subject can also ask you to remove all the personal data that you hold on them. If you cannot justify holding the data, then it has to be deleted. This is likely to impose a new normal. The current approach for most Financial Services firms is to hoard data to exploit its value across customer experience and business operations. Now firms will have to hold as little information as possible, for as short a period as possible, and delete that information as soon as possible.

Big data and 'profiling'

This raises the question of whether GDPR signals the end of big data. There is little doubt GDPR imposes specific constraints on 'profiling', which it defines in GDPR Article 4(4) as 'any form of automated processing of personal data … to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements'. Leaving aside the growing use of predictive analytics to drive personalized customer services, many established business processes – such as the insurance industry's use of telematics in underwriting – are potentially under threat.
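In practice, "machine-readable" for portability usually means a structured export such as JSON, and erasure means being able to find and delete every record for a subject. A toy sketch of both rights against a simple in-memory store (the store, field names and functions are all illustrative assumptions, not a real Financial Services data model):

```python
import json

# Illustrative in-memory store of personal data keyed by data subject.
customer_store = {
    "cust-001": {
        "name": "A. Example",
        "account": {"opened": "2012-04-01", "product": "current account"},
        "web_behavior": {"last_login": "2018-05-01", "cookies": ["session", "analytics"]},
    }
}

def export_subject_data(subject_id):
    """Right to data portability: everything held, in a machine-readable format."""
    return json.dumps(customer_store.get(subject_id, {}), indent=2)

def erase_subject_data(subject_id):
    """Right to be forgotten: delete all data (absent a justified retention reason)."""
    return customer_store.pop(subject_id, None) is not None

print(export_subject_data("cust-001"))   # full JSON export for the subject
print(erase_subject_data("cust-001"))    # True
print(export_subject_data("cust-001"))   # "{}" once erased
```

The hard part in a real firm is not the JSON serialization but the lookup: the store above is one dictionary, whereas the article's point is that the same subject's data is scattered across many applications, devices and backups.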
As customers continually demand personalized service, Financial Services firms need to find a way to continue big data activities in a way that is GDPR compliant. The first approach will be around 'explicit consent', so that you have the right to use data for a specific purpose. Another area to examine is the use of 'anonymization' and 'pseudonymization' of data as a means to retain personal data beyond its primary use for deployment in trend and locational analysis. If you haven't started to properly prepare for GDPR, you may already be behind and opening yourself up to the danger of major fines. You'll need to appoint a Data Protection Officer, establish a cross-functional team to assess the full impact of GDPR across your organization, and conduct a comprehensive and honest audit of where you are today. You'll also need to consider exactly how you are managing the data and content within your organization. The days of multiple instances and duplications of unmanaged customer information across various applications and databases are over. A centralized strategy for the management of enterprise information – underpinned by a flexible and scalable Enterprise Information Management (EIM) platform – is needed to ensure GDPR compliance for large, global Financial Services organizations.

Read More

Clarity First. Clarity Always – Managing Marketing Content in the Life Sciences

life sciences marketing

Marketing from the Life Sciences industry is constantly under the spotlight. US state and federal governments have handed out some eye-watering fines to pharmaceutical companies for false claims. Yet consumers, physicians and Life Sciences companies all want the same clarity and transparency in the information delivered. Life Sciences marketing management systems need to change before the industry loses its most important asset: stakeholder trust. Operating within any heavily regulated industry is challenging, and the penalties for non-compliance are rightly severe. When people's lives are at stake, it's clear that marketing information has to be reliable and trustworthy. A 2016 Public Citizen report showing that pharmaceutical companies had paid over $35 billion in fines over the previous 25 years demonstrates that these standards have not always been achieved. However, research suggests that the public retains high levels of trust in the marketing information it receives. Kantar Media suggests that 86% of adults have had some form of medical test, and 66% an annual physical, as a result of being exposed to TV advertising. In addition, Harvard University found that over three quarters of people felt that pharmaceutical companies adequately explained the side effects and risks of their drugs. So most people think that Life Sciences companies are doing the right thing; they just want to be sure they can rely on what they're being told.

The limitations of modern marketing systems

This is where the marketing environments within many Life Sciences companies – especially large global organizations – are currently acting as an impediment. They are constraining the agility companies need in order to fully grasp the opportunities presented by innovation and changing market conditions. They are inhibiting the ability to deliver excellent and consistent customer experience in an increasingly omni-channel world.
More importantly, the lack of end-to-end control and visibility across all marketing activity and assets leaves huge potential for the type of error or oversight that can lead directly to huge fines. The situation is entirely understandable. The last decade has seen an explosion of sales, marketing and creative solutions. The result is a siloed marketing ecosystem in which many solutions are incompatible with each other. Sales enablement, marketing automation, social media management, creative production systems and more all handle vital, sensitive information – almost always without any centralized control. Project management systems are often localized and provide little or no integration with the other marketing and creative systems. Add to this the need to collaborate and share information with partners and external agencies – frequently on a country-by-country basis – and the full scale of the challenge becomes apparent.

The holistic approach to Life Sciences marketing

What is required is a change in thinking. Life Sciences companies have to move away from a project-based, tactical approach to marketing – focused primarily on campaign delivery – to a more strategic approach built around the effective management and optimization of all the company's digital assets. OpenText calls this Marketing Content Management. Marketing Content Management enables a Life Sciences company to take complete, centralized control of all its digital assets and marketing activities across the entire global organization and its extended marketing supply chains. It brings together all the disparate systems that currently form the marketing ecosystem and allows the organization to take a holistic, quality-driven approach to the marketing information lifecycle for the first time. Embedded analytics help companies assess the efficiency of their processes as well as their effectiveness. What is most important about this approach is the level of control that the company now has.
It can now ensure that information is up-to-date and correct as it passes through the marketing process. Policies and procedures can be put in place to manage all digital assets from initiation to disposal. In addition, information can be securely shared with partners and agencies. The organization can ensure that everyone works to its standards and adheres to its policies. This delivers a new level of brand protection, as the marketing department will have full visibility of how its marketing materials are amended and deployed by trading partners such as resellers and distributors. Marketing Content Management eases the burden of regulatory compliance on the Life Sciences marketing organization. It delivers the transparency and auditability that mean the company can ensure the information within its marketing activities is correct and reliable – and that it is easy to provide that information should a regulatory agency require it. It is the foundation upon which customer trust can be built and maintained. Download our infographic on the 10 Best Practices for Life Sciences Marketing Content Management to take the first steps toward improving marketing quality and process harmonization.

Read More

When Search Meets AI, It Takes You Further

AI

Many think of AI as human-like robots or human-sounding voices that interact with actual humans, but the real value of artificial intelligence may derive from what it does behind the scenes. For example, when unsupervised machine learning (a species of AI) is applied to enterprise content, it can develop a deeper understanding of that content and deliver insights to humans in instantly valuable ways. Content is about more than mere words; it's about how words interrelate. By analyzing the statistical co-occurrence of terms across enterprise content from a range of sources, machine learning forms sophisticated models that can take search much further. It can identify concepts, extract phrases, suggest better queries, and pinpoint internal subject-matter experts with relevant expertise – whether or not their written profiles reflect it. AI-infused search helps people find what they're looking for, even when they're not exactly sure what that is. This is what OpenText™ Decisiv is all about.

"Asia Not Asia"

Imagine that you're looking to understand more about your business operations in Asia. Your first instinct might be to search your various data stores for documents containing "Asia," but will that term necessarily appear in the most valuable content? For that matter, how many documents will list city, state, country, and/or continent? Without machine learning, you would need an exhaustive search taxonomy to address all these variations. When a Decisiv user types "asia," the system instantly and automatically retrieves documents that (1) contain the word Asia and/or (2) are conceptually related to Asia. This not only casts a wider net, it provides results based on a more sophisticated relevancy analysis than simple keyword searching could hope to deliver.
Typing in the seemingly illogical query "asia not asia" is illustrative; it shows the user content that is conceptually related to Asia but doesn't actually contain the term Asia – displaying only the documents found by unsupervised machine learning above and beyond keyword search. Concept grouping categorizes documents according to linguistic patterns that we humans have a hard time identifying across large volumes of data. Machine learning makes such conceptual analysis automatic and highly scalable, and humans get the benefit.

Further Still

Concept groups can also propel a researcher to new areas of useful content. By looking at a list of the key concept groups that appear in response to a search or metadata filter (along with the document counts for those groups), researchers can see an overview of the content that's available, spot pertinent aliases, adjust their search terms, and include or exclude concepts to meet their objectives. Concept groups are displayed with characteristic labels (the top words and phrases that appear in those groups) to provide a sense of what each group represents and how useful it's likely to be for a given search.

Take Decisiv Action

Register for OpenText's annual user conference, Enterprise World, in Toronto this July to learn how AI-enhanced search can help empower your digital transformation. You'll hear from top users of Decisiv Search and our other award-winning Discovery products how they're leveraging machine learning for more effective enterprise information management. All humans welcome.
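The co-occurrence idea behind the "asia not asia" example above can be shown with a deliberately tiny toy sketch: terms that frequently co-occur with the query term across the corpus are used to retrieve documents that never mention the term itself. This is a crude approximation for illustration only, not how a production system like Decisiv is implemented:

```python
from collections import Counter

# Toy corpus: d1 and d4 mention "asia"; d2 is about Asia without the word.
docs = {
    "d1": "asia expansion singapore office tokyo revenue",
    "d2": "singapore office headcount tokyo lease",
    "d3": "london office quarterly revenue",
    "d4": "asia strategy tokyo singapore partners",
}

def cooccurring_terms(query, docs, top_n=2):
    """Terms that most often appear alongside the query term across documents."""
    counts = Counter()
    for text in docs.values():
        words = set(text.split())
        if query in words:
            counts.update(words - {query})
    return [w for w, _ in counts.most_common(top_n)]

def conceptual_hits(query, docs):
    """Documents related to the query via co-occurring terms, with direct
    keyword hits excluded: the 'asia not asia' result set."""
    related = set(cooccurring_terms(query, docs))
    return [d for d, text in docs.items()
            if query not in text.split() and related & set(text.split())]

print(conceptual_hits("asia", docs))  # ['d2']: mentions tokyo/singapore but never 'asia'
```

Real systems build far richer statistical models than raw co-occurrence counts, but the payoff is the same: d2 surfaces for an "Asia" query even though simple keyword search would miss it.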

Read More

Discovery Rises at Enterprise World

This summer will mark a full year since Recommind became OpenText Discovery, and we're preparing to ring in that anniversary at our biggest conference yet: Enterprise World 2017! We're inviting all of our clients, partners, and industry peers to join us for three days of engaging roundtables, interactive product demos, Q&A with experts, a keynote from none other than Wayne Gretzky, and – of course – the latest updates, roadmaps, and visions from OpenText leaders. Here's a sneak peek of what to expect from OpenText Discovery's track:

The Future of Enterprise Discovery. We'll be talking at a strategic and product-roadmap level about unifying Enterprise Information Management (EIM) with eDiscovery. New data source connectors, earlier use of analytics, and even more flexible machine learning applications are on the way!

Introduction to eDiscovery. Our vision for the future of eDiscovery is broader than the legal department, and we're spreading that message with sessions tailored for IT and data security professionals who want to know more about the legal discovery process and data analysis techniques.

Why Legal is Leading the Way on AI. Our machine learning technology was the first to receive judicial approval for legal document review, and in the years since, we've continued to innovate, develop, and expand machine learning techniques and workflows. In our sessions, we'll highlight current and future use cases for AI in investigations, compliance, due diligence, and more.

Contract Analysis and Search. We'll also have sessions focused exclusively on innovations in enterprise search and financial contract analysis. Join experts to learn about the future of predictive research technology and the latest data models for derivative trading optimization and compliance.
Our lineup of sessions is well underway, and we've got an exciting roster of corporate, academic, government, and law firm experts, including a special keynote speaker on the evolving prominence of technology in law. Register here for EW 2017 with promo code EW17TOR for 40% off, and we'll see you in Toronto!

Read More

The GDPR and Why Digital Marketing Will Never be the Same

We know that the General Data Protection Regulation is giving Compliance and IT some heartburn as these teams work to understand the GDPR's new requirements and how they will affect their organizations. But perhaps the biggest impact will be on Marketing – specifically digital marketing – which will require a cultural shift that presents challenges but, for smart organizations, opportunities to succeed as well.

Consent is king

The days of implied, sneaky, and bundled consent are gone. Starting in May 2018, brands have to collect active consent that is "freely given, specific, informed and unambiguous" to be compliant with the GDPR. Someone provided their email address to download a whitepaper? If they didn't actively agree that it is okay to use their data to send marketing messages, it won't be legal to add that email address to your mailing list. Also, because there is no "grandfather clause" for data captured before the GDPR, we expect to see lots of re-permissioning campaigns to establish clear consent to use the personal data brands already hold. The GDPR will change how gated assets are used, how leads are collected, and how referral programs work. In other words, "collect it now and figure out what to do with it later" will become a high-risk strategy. The challenge for marketers will be providing "granular choice" for consent in a way that is minimally intrusive and not detrimental to the customer experience.

Legitimate interest is not a get-out-of-jail-free card

The GDPR states that the "legitimate interest" of a controller can provide a legal basis for using personal information without obtaining consent (GDPR Article 6.1(f)). However, marketers should use this clause with caution. Legitimate interest can only be invoked provided that there is "no undue impact" on data subjects.
In other words, a business that intends to use personal information must balance its legitimate interest against the rights and interests of the individual – and bears the onus of demonstrating that it has done so.

Personalization…and privacy – consumers want it all

A recent study found that 90 percent of consumers have privacy concerns but also seek highly personalized and tailored customer service. Personalization is key to modern customer experiences, and customers make purchase and loyalty decisions based on the level of individualized service they receive. This introduces a challenge for many businesses and marketers: in order to provide highly personalized offerings, they need a better understanding of their customers' needs, purchasing histories and attitudes. That means collecting, analyzing and managing customer data related to these preferences and behaviors. However, consumers also have growing concerns over their privacy and the use of their data. Marketers will have to find ways to comply with the GDPR while continuing to deliver the personalized products, services and customer experiences that their consumers demand.

Pseudonymization – Marketing's new hope?

The EU has been explicit that the GDPR should facilitate – not inhibit – innovation within business. In fact, the regulation calls out the "freedom to conduct a business" as one of the fundamental rights it respects. The tracking and analysis of consumer behaviors and preferences are valuable tools that marketing and sales functions rely on to be successful. The process of pseudonymization may provide a way for regulators and businesses to meet in the middle.
The GDPR defines pseudonymization as "the processing of personal data in such a way that the data can no longer be attributed to a specific data subject without the use of additional information." It is a privacy-enhancing technique in which directly identifying data is held separately and securely from processed data to ensure non-attribution of that data to an individual. As it turns out, controllers don't need to provide data subjects with access, rectification, erasure or data portability if they can no longer identify a data subject. Organizations should look to technology tools as a means of pseudonymizing or masking consumer data and encrypting personally identifiable data, in combination with organizational process changes, to ensure compliance.

It's May 2018. Do you know where your personal data is?

A majority of businesses have stated that they are not ready for the GDPR. A big reason for this is the potentially onerous requirement for organizations to be able to quickly assemble a data subject's personal data upon request for purposes of erasure, rectification or export. According to a recent GDPR readiness survey, only 26% of respondents currently keep an up-to-date register of the personal data they hold and the purposes for which it is used. If there ever was a time to get one's arms around all the personal data an organization holds, the type of permission that was obtained, and a governance structure to manage it, that time is now. Information classification schemes, data storage methods and records retention programs need to be reviewed to ensure that data portability, removal, or correction is not only feasible but efficient, if and when needed.

How OpenText can help

The GDPR is a game-changer for digital marketers, and while there will be challenges to overcome, the game can change in their favor too. Yes, the days of "data maximization" and blanket consent appear to be over. But it's for those very reasons that the GDPR will lead to new marketing opportunities.
The GDPR forces businesses to develop more thoughtful approaches to targeting and lead acquisition. Prospects who opt in are better qualified, more engaged and want to be marketed to. Because consumers have more control over how their data is used, we'll see better-quality relationships between businesses and prospects. OpenText™ Enterprise Information Management (EIM) solutions help organizations meet regulatory requirements and should be central to your overall GDPR compliance and data protection strategy. According to Forrester, "77% of consumers have chosen, recommended, or paid more for a brand that provides a personalized service or experience." Utilizing Workforce Optimization solutions within our Customer Experience Management portfolio, we can provide sentiment analysis to help measure the effectiveness of your marketing campaigns and provide guidance on appropriate promotions to communicate, based on whether or not the consumer has given consent. Learn more about our solution here. Stay tuned for our next blog post in April on "Disrupt Yourself – Personalized Marketing in the Age of GDPR". You can also read some of our previous blogs on this topic:

Five 2017 Compliance Challenges
GDPR and EIM
GDPR – Opportunity or Threat for B2B
Discovery Analytics and GDPR

Read More

Data Protection in the Information Age – What Questions Should I ask?

data protection

"Keep it secret, keep it safe." While most of you, I hope, recognize this line from Peter Jackson's The Lord of the Rings: The Fellowship of the Ring as Gandalf's charge to Frodo regarding the One Ring, I submit that it also represents the primary goal of information security in today's age of information. The ocean of the blogosphere and twitter-verse is awash with wave after wave of the opportunities available to organizations able to capitalize on their digital assets by harnessing the power of analytics engines fed by robust business networking solutions. Check these blogs out for some wonderful examples.

2016 data breaches set records

But these waters are not always safe. Googling '2016 data breaches' yields more than 5.6 million results in less than half a second. Bloomberg contributor Olga Kharif writes that 2016 "was a record year for data breaches." From the DNC to LinkedIn, from the IRS to Snapchat, from Wendy's to Yahoo, it's clear that pirates sail the waters of the Information Age. And the pirates may be getting bigger and bolder. On March 22, the WSJ reported that "Federal prosecutors are building cases that would accuse North Korea of directing one of the biggest bank robberies of modern times, the theft of $81 million from Bangladesh's account at the Federal Reserve Bank of New York last year." So how can today's digital organization successfully navigate these waters? How can CIOs, CISOs, and other C-level executives be confident their own harbors won't crumble under the next attack? As more and more data inside the enterprise originates outside the enterprise, what about the defenses of those external harbors in one's digital ocean? More urgently, as more and more business data applications move to cloud-based solutions, what questions do I need to ask to be comfortable my data is kept both secret and safe?
Questions to "keep it secret, and keep it safe"

When evaluating current or prospective solution providers, here are the basic questions you need to ask your provider – if not your own internal team – about how your data is secured.

Will you show me you've thought about this before? This question goes to the information security policies, certifications and audits in place. Is there a framework of policies and procedures that includes all the necessary controls in the organization's information risk management processes? Are these processes certified against ISO 27001, NIST or similar? Do you undergo regular external audits? Can you provide copies of your SSAE-16 SOC 1, SOC 2, and/or SOC 3 reports?

Where is it? This question speaks both to network topology and architecture and to the physical and environmental controls of the locations where your data is stored and processed. What firewalls are in place? Is there a DMZ? Are proxies used to move data from the DMZ into the processing applications? Is the data encrypted at rest?

How does it get there? This question speaks to the controls surrounding data transmission. Are secure protocols used? Is the actual data being sent also encrypted or digitally signed?

Who can see it? This question speaks to access control. The goal is that only the right people can see the right information, at the right time, for the right reasons. Here is where you want to ask whether multifactor authentication is used. Is there data leakage protection in place?

How do you know? What monitoring – automated and manual – is in place? Are access points secured by unified threat management tools? What about intrusion prevention? What's the process when an incident is detected, or even suspected?

How do you keep up? The only constant in the information age is change – from the amount of data being created (IDC estimates the digital universe is growing at 40% per year) to the ever-increasing and changing nature of cyber threats.
How does the organization stay current? What is the policy and process for applying patches? What level of technical debt is in place (i.e., what versions of the hardware and software components are running)? This is by no means an exhaustive list of questions, but these are some of the essential ones to ask. And good answers serve to keep the pirates at bay.
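On the "encrypted or digitally signed" point above, even a simple keyed signature lets a receiver verify that transmitted data was not tampered with in transit. A minimal stdlib sketch of the principle using HMAC (the shared key and payload are illustrative; a real deployment would use TLS plus managed, rotated secrets):

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key-rotate-me"  # illustrative only; manage and rotate real keys

def sign(payload: bytes) -> str:
    """Attach a keyed digest so the receiver can detect tampering."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels when checking signatures
    return hmac.compare_digest(sign(payload), signature)

msg = b'{"invoice": 4711, "amount": 81.00}'
sig = sign(msg)
print(verify(msg, sig))                        # True
print(verify(msg.replace(b"81", b"91"), sig))  # False: tampering detected
```

A provider that can answer "yes, and here is how" to this class of question – with key management, not just an algorithm – is the one you want guarding the harbor.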

Read More

General Data Protection Regulation (GDPR) – How can Customers use OpenText and SAP for Timely Deletion

GDPR

In part 1 of this blog, we discussed what the General Data Protection Regulation (GDPR) means for enterprises and how data and content generated and stored in the course of day-to-day business processes in SAP are subject to this regulation. Our example was the incoming vendor invoice on paper, which is scanned, attached to the SAP transaction via ArchiveLink and then securely stored on OpenText™ Archive Center. This paper invoice may contain a contact name at the supplier, a phone number, and an email address – data that, when combined, could identify an individual, such as an employee of the supplier. This personal data is protected by GDPR. Let's recap: collecting and processing data is legitimate as long as it serves a justified purpose, as defined by GDPR, "if data processing is needed for a contract, for example, for billing, a job application or a loan request; or if processing is required by a legal obligation …" Justified purposes for storing and retaining personal data include laws that govern retention of content, such as tax-relevant data and documents, where retaining the scanned vendor invoice or a customer bill is not only justified but an obligation. BUT: when the legitimate reason for the processing has expired, the transactional data and the attached ArchiveLink document need to be deleted. In our example, the scanned vendor invoice needs to be retained as long as taxation laws require, but deleted just after this retention period – which is 10 years in Germany, for example. This means that enterprises are advised to set up retention rules to govern the necessary retention AND put processes in place that will delete data and attached content in a timely fashion, when it is no longer needed or when the justified purpose for retention has expired.
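The disposition logic described above – retain while a legal obligation applies, delete promptly once it expires – can be sketched in a few lines. The 10-year period follows the German tax-law example; the field names are illustrative and not SAP ILM or Archive Center APIs:

```python
from datetime import date

RETENTION_YEARS = 10  # e.g. German tax retention for vendor invoices

def retention_end(fiscal_year_end: date) -> date:
    """Retention runs for RETENTION_YEARS from the relevant fiscal year end."""
    return fiscal_year_end.replace(year=fiscal_year_end.year + RETENTION_YEARS)

def disposition(doc: dict, today: date) -> str:
    """Retain while the legal obligation (or a hold) applies; delete afterwards."""
    if doc.get("legal_hold"):
        return "retain (legal hold)"
    end = retention_end(doc["fiscal_year_end"])
    return "delete" if today >= end else f"retain until {end.isoformat()}"

invoice = {"id": "INV-4711", "fiscal_year_end": date(2007, 12, 31), "legal_hold": False}
print(disposition(invoice, today=date(2018, 1, 1)))  # 'delete': 10 years have elapsed
```

Note the two-sided nature of the rule: deleting early violates the tax obligation, but retaining past the expiry date violates GDPR – which is why the deletion step has to be an automated process, not a manual clean-up.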
Retention Management for SAP® Data and Related Content

Neither OpenText nor SAP can provide legal advice or guidance in this matter, but they do offer software capabilities that help customers set up policies and procedures for the retention and deletion of transactional data and attached content. The products that play together here are SAP® Information Lifecycle Management (SAP ILM) and the OpenText™ Enterprise Content Management solutions for SAP: OpenText™ Archiving, Document Access and Extended ECM for SAP Solutions (see OpenText Suite for SAP). SAP ILM provides records management for SAP data and can also be configured to apply the same retention schedule to the attached SAP ArchiveLink documents. However, SAP ILM itself does not provide the storage for data and documents; it relies on ILM-aware platforms for this purpose. OpenText Archiving, Document Access and Extended ECM provide the compliant, ILM-aware platform for ILM data files and ArchiveLink documents. These solutions store the content, enforce the retention rules and holds from ILM down to the hardware level and, at the end of the lifecycle, execute the deletion request coming from SAP ILM. SAP ILM acts here as the leading application for the retention management of SAP data and attached ArchiveLink documents. So far so good – if you only look at SAP data and attached ArchiveLink documents.

Enterprise-Wide Records Management

However, personal information in business documents does not stop at the boundaries of the SAP applications. You will also have content outside SAP which you want to retain and manage, put under records management, and delete in a timely fashion when the reason for retention has expired. This is where Extended ECM for SAP Solutions comes into play. Extended ECM provides DoD-certified records management for SAP ArchiveLink documents as well as non-SAP content, which can be related to SAP business objects via the ECMLink module.
A customer that wants to benefit from DoD-certified records management for documents can use Extended ECM for all unstructured content inside and outside SAP, while SAP ILM provides the records management for SAP data. If SAP ILM is about to delete data that relates to Extended ECM content that has not yet expired, the two solutions can synchronize so that business documents in Extended ECM will not be orphaned by SAP ILM. At the same time, Extended ECM represents the ILM-aware storage platform for SAP data and documents. So SAP ILM, together with Extended ECM for SAP Solutions, can manage the retention of data and unstructured content inside and outside SAP.

Where to Find More Information

Learn more about OpenText's capabilities to support GDPR requirements by reading our other blogs here and here. You can also visit our main website to learn how OpenText EIM offers capabilities that help customers prepare for GDPR, or listen to our webinar.

Read More

Five Compliance Challenges Facing Your Organization in 2017

compliance challenges

2017 is turning out to be a tumultuous year for compliance. A combination of Brexit, a Trump presidency and the reform of EU privacy rules has put regulatory change and uncertainty back into the spotlight. Mega-size fines have returned too, and compliance officers worry about personal liability more than ever.

1. The GDPR – the countdown is on

If your company hasn't familiarized itself with the General Data Protection Regulation (GDPR) yet, you may already be behind. The GDPR was ratified in May 2016 and is designed to bring personal data protection into the digital age. It imposes stringent requirements on how companies store and handle the personal data of EU citizens. The regulation will have far-reaching impacts – from how organizations obtain consent and use cookies on their websites, to giving teeth to the right to be forgotten. Don't think that, because this is EU legislation, the GDPR won't affect you: it affects any organization that collects and stores the personal data of EU citizens. With the GDPR becoming enforceable in May 2018, the countdown is on for organizations to prepare. The GDPR will impact more than just the Compliance team – indeed, many other parts of the business.

Key steps

An important first step is to gain clarity on the personal data processing practices and content within your organization, including:

• What personal data do you process?
• Where is it stored across the organization?
• Who has access to it?
• What consent has been provided, and where is it documented?
• Where is it transferred from and to (including to third parties and cross-border)?
• How is it secured throughout its lifecycle?
• Are there policies and processes in place to dispose of personal data?

Visit OpenText GDPR to learn more about the regulation and how OpenText can help.

2. Pressure on the Compliance function not letting up

Compliance officers have never had a higher profile than they do now, but with great power comes great responsibility.
Pressure on the compliance function has been steadily increasing, and 2017 is no exception. For example, 69 percent of firms surveyed in 2016 expected regulators to publish even more regulations in the coming year, with 26 percent expecting significantly more. In addition, personal liability appears to be a persistent worry: 60 percent of survey respondents expect the personal liability of compliance officers to increase in the next 12 months, with 16 percent expecting a significant increase. The GDPR also brings the rare explicit requirement to appoint a qualified compliance role, the Data Protection Officer (DPO). Though the GDPR does not establish the precise credentials DPOs must have, it does require that they have "expert knowledge of data protection law and practices."

Key steps

Compliance officers don't need to be technology experts, but they do need to know how to leverage governance, risk and compliance solutions to make their jobs easier. Other key steps include ensuring your policy framework is up-to-date and that staff understand, and are trained in, their compliance responsibilities. Read the AIIM white paper and infographic: Managing Governance, Risk and Compliance with ECM and BPM.

3. A new administration means changes in regulatory priorities

President Trump has been clear and consistent on his desire to reduce the number of regulations in place. From financial services to the environment, compliance officers are bracing for the changes and what they will mean for them. Most industry experts agree that even where regulations are streamlined or reformed, there will be plenty of work for your team to do to address the vacuum left by previous regulations or to interpret the way the new regulations need to be applied. The picture may be uncertain at the moment, but you can be certain that any changes will mean work for your Compliance team.

Key steps

How do you prepare for the unknown?
Many pundits wisely advise that it's business as usual: don't re-draft policies and procedures just yet. However, now is a good time to evaluate your overall compliance program. For example, if your organization does not have its regulatory information management house in order, now is the time to clean up. Whether your firm is based in the United States or does business with it, potential changes to the regulatory landscape mean that businesses will need to be adaptable in order to quickly take advantage of opportunities, mitigate risks, and stay in compliance. Learn about OpenText compliance solutions. Continue to read compliance challenges 4 and 5 on page 2.


The 3 Most Asked Questions about Fax Technology in Healthcare

healthcare

Freshly back from HIMSS 2017, I spent some time reflecting on the rich conversations I had with tradeshow attendees. These top three questions came up so consistently that I wanted to share them, just in case you missed them at HIMSS:

Q: How are healthcare organizations using fax solutions to save costs or be more efficient?

A: Two words: simplify and optimize. Fax and paper continue to dominate patient information exchange, accounting for as much as 90% of all exchanges. First, healthcare organizations should simplify their faxing by eliminating the security and compliance risks of standalone fax machines and manual faxing, replacing them with a secure, digital fax solution. This eliminates unnecessary paper and the costly, time-consuming task of manual faxing. Second, they should optimize their faxing by integrating their digital fax solution with Electronic Medical Record (EMR) systems, MFP devices, document management systems, or other healthcare applications. By integrating electronic fax with the devices and applications they use the most, healthcare providers get access to the right patient information when and where they need it.

Q: What trends are you seeing with fax technology in healthcare?

A: There are two major trends in healthcare today: rising fax volumes (yes, you heard me right) and hybrid fax deployments. First, fax volumes are rising. As more patients enter the health system – attributed to more people having affordable access to healthcare and the healthcare needs of the aging population – fax volumes increase too. The second trend is the shift to hybrid fax deployments, which combine an on-premises fax server with cloud-based fax transmission. Hybrid fax deployments are becoming more and more popular because they simplify existing on-premises fax server deployments and allow healthcare organizations to leverage the cloud for just the transmission of the fax.
In addition to simplifying the deployment, the on-premises fax server keeps its integrations with EMR systems, MFP devices, and other healthcare applications, and there is no change to the user experience or established patient information exchange workflows.

Q: Where is fax technology headed, and how is OpenText innovating in healthcare?

A: As other forms of patient information exchange develop, such as Direct messaging and other forms of electronic exchange, it's important that fax technology evolve to coexist with them, because fax is so deeply rooted in healthcare. When fax coexists with other forms of exchange, healthcare organizations can transition to new forms of exchange at their own pace – or, as importantly, at the pace of other providers in the healthcare continuum – with minimal or no change to the user experience (or better yet, an improved one!). For example, OpenText has recently launched an innovative healthcare solution that combines fax and Direct messaging in a single solution, allowing healthcare organizations to convert an outbound fax to a Direct message whenever possible with no change to how they send a fax today. I'm already looking forward to HIMSS 2018 and the great conversations we will have then!
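At its core, the fax-to-Direct coexistence described above is a per-recipient routing decision. A minimal sketch in Python, assuming a hypothetical recipient record with an optional Direct address; this is an illustration of the concept, not OpenText's actual API:

```python
def route_outbound(recipient: dict) -> str:
    """Decide how an outbound document leaves the organization:
    prefer Direct messaging when the recipient has a Direct address,
    otherwise fall back to fax, leaving the sender's workflow unchanged."""
    if recipient.get("direct_address"):
        return f"direct:{recipient['direct_address']}"
    return f"fax:{recipient['fax_number']}"

# A recipient with a Direct address gets a Direct message...
print(route_outbound({"direct_address": "dr.smith@direct.example.org",
                      "fax_number": "+1-555-0100"}))
# ...while one without falls back to ordinary fax.
print(route_outbound({"fax_number": "+1-555-0199"}))
```

The point of the design is that the sender never chooses the transport; the routing layer does, which is why the user experience stays the same as adoption of Direct messaging grows.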


Regulatory Matters: Collaboration is key for Life (Sciences) in 2017 – Part Two

Regulatory

The Life Sciences sector is very innovative. The Boston Consulting Group found that almost 20% of the world's most innovative companies come from the sector. In fact, PwC suggests that Healthcare will surpass Computing as the largest industry by R&D spend by 2018.

Shining a light on the innovation paradox

Yet, for all the effort, there is still a lack of new products. Last year marked a six-year low for new drug approvals by the FDA. The rise of treatment-resistant superbugs has shone a light on the fact that there hasn't been a completely new antibiotic for over 30 years. The poor return on R&D investment explains the paradox between increasing innovation and decreasing new products. Deloitte found that returns on research and development investment at the top 12 pharmaceutical companies fell to just 3.7 percent in 2016 from a high of 10.1 percent in 2010. While many Life Sciences executives remain upbeat about the development of new medicines, it's clear that two factors will drive success: achieving improved operating efficiencies internally and creating more strategic alliances externally.

The Internet of Things will increase the focus on cybersecurity

In 2014, the Financial Times found that cyber security for the healthcare and pharmaceutical industries worsened at a faster rate than in any other sector. As the sector becomes more and more IT-driven in terms of innovation, R&D, and manufacturing, cyber crime has been increasing in areas such as intellectual property (IP) theft, international espionage, and denial-of-service attacks. As the sector looks to embrace digital transformation and the Internet of Things (IoT), cyber security is likely to be at the top of every CIO's priority list. The trend towards preventative and outcome-centric models relies on the ability to monitor and measure the health of individual patients. Whether wearables or other intelligent medical devices, the requirement for some form of online connectivity creates a vulnerability.
At a recent cyber security conference, experts showed how items such as an insulin pump can be hacked. This represents a real threat to the individual, but it also raises the possibility of devices such as pacemakers being used to launch denial-of-service attacks on other targets. To address these concerns, the FDA has issued guidance to medical device manufacturers on mitigating and managing cybersecurity threats. The excitement around IoT has to be tempered with the need to deliver watertight security. This stretches way beyond the ability to gain access to user devices: it has to encompass data in transit and the management and storage of data within the life sciences company itself. Security-by-Design – built into all OpenText solutions – will become a foundational element of every part of the IT infrastructure for healthcare and pharmaceutical companies.

Achieve operational efficiencies to improve margin and time to market

With the focus firmly on value-based medicine, personalized care, and population health, the Life Sciences sector is experiencing new levels of convergence and collaboration. Companies have begun to transform their business operations through collaborative product development and new service development. The 'not invented here' model is no longer appropriate for increasingly complex and expensive product lifecycles. As Deloitte points out: “Collaborating throughout the product development lifecycle is becoming an increasingly common and effective way for biopharma and medtech companies to offset mounting R&D costs, funding shortfalls, increasing disease complexity and technology advances”. In 2017, life sciences companies are transforming their traditional, linear supply chain into a supply chain of dynamic, interconnected systems that integrate their ecosystem of partners.
This new supply chain modality allows organizations to extend their value chain beyond product development into the enablement of care in an increasingly outcome-based healthcare environment. By creating a secure, open, and integrated supply chain, organizations are able to reduce cost, increase quality, and manage risk across the partner ecosystem. It provides the foundation to quickly and easily extend the partner network for Life Sciences. As you evaluate your business strategies and priorities over the next 12-18 months, collaboration with trusted partners like OpenText can prepare your organization for the challenges ahead. Contact me at jshujath (@opentext.com) to discuss how we can help. If you missed the first blog in this two-part series, you can view it here.


General Data Protection Regulation (GDPR) – What is it and How Does it Impact Enterprise Information Management?

GDPR

In May 2016, a new EU Regulation and Directive were released to govern the protection of personal data: the General Data Protection Regulation (GDPR). It will enter into force after a two-year grace period, in May 2018. That is just a little more than one year away, and enterprises need to get active to evaluate what it means for them and how they need to prepare. As stated on the European Commission website: “The objective of this new set of rules is to give citizens back control over their personal data, and to simplify the regulatory environment for business.” Data protection laws are nothing new in the European Union. However, the new GDPR rules present some significant impacts and changes to current data privacy regulations. For one, what used to be a directive is now a regulation with the full force of law, valid across all EU countries. And despite Brexit, the UK government has confirmed that the UK will implement the GDPR (read the UK Information Commissioner's blog on this topic). The other important aspect is that the GDPR imposes substantial fines upon individuals and enterprises that do not adhere to the law. Minor breaches will be fined up to €10 million, or up to 2% of the total worldwide annual turnover of the preceding financial year for a business, whichever is higher. Major breaches will be fined up to €20 million, or up to 4% of the total worldwide annual turnover of the preceding financial year for a business, whichever is higher. And it should be re-emphasized that this is not just the turnover of the EU-located part of the enterprise, but the worldwide turnover of the enterprise.

Protecting Personal Data of EU Citizens – What does that mean?

As the GDPR protects the personal data of citizens of the European Union, it imposes duties upon enterprises that collect and manage personal data. Under the regulation, these entities are known as “data controllers” (who determine the purposes of processing) and “data processors” (who process data on their behalf).
Enterprises located in the EU are subject to the GDPR, but so are companies outside the EU that process the personal data of EU citizens: the regulation applies to any non-EU enterprise with contact points to the EU. Collecting and processing data is legitimate as long as it serves a justified purpose, as defined by the GDPR, for example “if data processing is needed for a contract, for example, for billing, a job application or a loan request; or if processing is required by a legal obligation …” Such justified purposes for storing and retaining personal data include laws that govern the retention of content, such as tax-relevant data and documents, where retaining the scanned vendor invoice or a customer bill is not only justified but an obligation.

What is the relevance of GDPR for Day-to-Day Business Processes?

Personal data relating to business partners, such as customers and suppliers, is processed and stored during the course of day-to-day business processes, in procure-to-pay as well as order-to-cash processes. To give some concrete examples, let's take a look at an enterprise that uses SAP ERP to manage its processes and OpenText to attach business documents to them. It is of course not just about the data created and stored in the SAP database of the leading enterprise application (ERP, CRM, …); it is also about the business documents captured during the process. Take, for example, an incoming vendor invoice on paper, which is scanned, attached to the transaction via ArchiveLink, and then securely stored on OpenText™ Archive Center. Or, in an order-to-cash process, it is an incoming sales order and a delivery note to a client, which are linked to the SAP order and stored in OpenText.
In May 2018, the GDPR will start to apply, following a two-year transition period to allow the public and private sectors to get ready for the new rules. So how should enterprises prepare? With regard to storing personal data for a justified purpose, enterprises need to set up policies and procedures – not only to retain content as long as they are obliged to by laws such as taxation or product liability laws, but also to delete content in a timely fashion when it is no longer needed or the justified purpose for retention has expired. Learn more about OpenText's capabilities to support GDPR requirements in the SAP environment in a forthcoming blog post, and by reading our other blog entries here and here. You can also visit our website to learn how OpenText EIM offers capabilities that can help customers prepare for GDPR, or listen to our webinar.
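The two-tier fine structure described in this post translates into simple arithmetic: for each tier, the cap is the higher of a fixed floor and a percentage of worldwide annual turnover. A minimal sketch (the function name and the example turnover figures are illustrative, not drawn from the regulation):

```python
def gdpr_max_fine(annual_worldwide_turnover_eur: float, major_breach: bool) -> float:
    """Return the maximum possible GDPR fine in EUR.

    Minor breaches: up to EUR 10M or 2% of total worldwide annual
    turnover, whichever is higher. Major breaches: up to EUR 20M or 4%.
    """
    if major_breach:
        return max(20_000_000, 0.04 * annual_worldwide_turnover_eur)
    return max(10_000_000, 0.02 * annual_worldwide_turnover_eur)

# A company with EUR 2 billion worldwide turnover: 4% (EUR 80M)
# exceeds the EUR 20M floor, so the percentage cap binds.
print(gdpr_max_fine(2_000_000_000, major_breach=True))

# A smaller company with EUR 400M turnover: 2% is only EUR 8M,
# so the EUR 10M floor binds instead.
print(gdpr_max_fine(400_000_000, major_breach=False))
```

Note that for large enterprises the percentage cap, not the fixed floor, is what binds, which is why the worldwide (not EU-only) turnover matters so much.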


Regulatory Matters: Collaboration is key for Life (Sciences) in 2017 – Part One

Life Sciences

Life Sciences, like life itself, is constantly evolving. The rigid, product-based environment of complementary but discrete healthcare specialists is rapidly being replaced with a fluid ecosystem where growing, global value chains and strategic alliances drive innovation and price competitiveness. Secure collaboration is key, as Greg Reh, Life Sciences sector leader at Deloitte, says: “All of the pressures that life sciences companies are under, be they cost, regulatory or operational, in some way shape or form can be de-risked by adopting a much more collaborative approach to R&D, to commercialization, to manufacturing and distribution”. As increased collaboration touches every part of a Life Sciences business, there are a number of trends that will affect most companies during 2017.

Prepare for uncertainty in the compliance landscape

There has been a great deal written about the effect that the Trump administration will have on regulatory compliance. Amid all the uncertainty, Life Sciences companies can't take a 'wait and see' attitude. One thing we do know for certain is that new legislation and regulations will keep coming. Whether it's the pending regulations on medical devices in the EU or MACRA (the Medicare Access and CHIP Reauthorization Act) in the US, regulatory change does not stand still – not even for a new president! We also know that there is greater focus on enforcement. According to law firm Norton Rose Fulbright, almost one third of all securities class actions in the US in 2016 were against Life Sciences companies, a figure that had risen in each of the previous three years. The firm noted that 56% of claims in 2014 were for alleged misrepresentations or omissions. In response, companies have been focusing on effective marketing content management to develop appropriate quality control over promotional and advertising materials.
In addition, enforcement is becoming more stringent in areas such as the TCPA and FCPA – last year, global generic drug manufacturer Teva agreed to pay $519 million to settle parallel civil and criminal charges. Within extended value chains, compliance becomes an increasingly collaborative process to ensure that information is available to regulators. However, in compliance, collaboration works both ways: Life Sciences companies need to be more collaborative because global regulators and enforcement agencies are already cooperating with each other. As regulators and agencies share information and work together, it becomes even more important to manage compliance risk across the organization and beyond.

Consumer price sensitivity continues to drive value-based pricing models

According to Statista, sales of unbranded generic drugs almost doubled between 2005 and 2015. In Japan, the government has an objective of substituting 80% of branded drugs with generics by 2020. There is increasing price sensitivity within both the buyer and regulator communities. Within many economies, the depressed fiscal environment limits the potential for healthcare spending, and governments and insurance companies want to shift payment from product sales to patient outcomes. In fact, the U.S. Centers for Medicare and Medicaid Services (CMS) wants 90% of all Medicare payments to be value-based by 2018. This value-based pricing model places extra burdens on drug companies but also offers opportunities for those organizations to maintain profitability within branded drugs. It provides the opportunity to look 'beyond the pill' – at the patient and what they're doing. This requires end-to-end evidence management systems that exploit the masses of data created through managing patient outcomes to deliver value-added services around patient wellbeing, rather than simply selling more, or more expensive, drugs.
At OpenText, we would expect most digital transformation efforts to include an element to enable the correct environment for value-based pricing, especially as operational efficiencies and time to market are improved. Part Two of this blog is available to read here.


Why Lawyers are Adopting AI Faster Than You

AI

When you think of bold, innovative users of transformational technology like artificial intelligence (AI), you naturally think: lawyers. It's obvious, right? Risk-averse, measured, charge-by-the-hour, brick-and-mortar professionals who parse the written word (“Heretofore? No, hitherto!”) and deliver cautious, nuanced advice (“I didn't hear your question, but regardless my answer is: It depends.”). Who better to make practical use of today's cutting-edge AI? (“Alexa, draft an amicus brief in support of my motion in limine. Please.”) Lest the irony be missed, the legal industry is deservedly notorious for being a technological step or two—or more—behind its clients. Yet law firms and savvy corporate legal teams have been pioneering the use of artificial intelligence for the better part of a decade. There is not a litigator of note today who hasn't heard of Predictive Coding or Technology Assisted Review. These terms refer to the use of machine learning to mimic an attorney's decision-making in the context of legal discovery, the process of identifying and reviewing up to millions of documents to determine which must be produced to the other side in litigation or an investigation. Predictive Coding can mean dramatically faster and more accurate document analysis and review.

Why are lawyers leveraging AI for document review?

Big data: The growing amounts and kinds of data generated by workers—in office programs, cloud apps, chat systems, shared workspaces—mean an ever-increasing challenge for legal and compliance officers. To them, all of this work product is potential evidence.

Bigger cost: Of the more than $200B spent on litigation across the US annually, 70% goes to discovery, and 70% of that discovery spend goes to document review. So anything that can accelerate or reduce review means substantial savings for corporate clients.

Irrelevant content: No one likes reviewing irrelevant data. (Imagine if you had to carefully read your junk email before deleting it.)
Front-loading relevant content makes document review more engaging for attorneys, which improves their productivity and accuracy.

The need for speed—to insight: Over 95% of civil cases settle, as the uncertainty and cost of a trial are generally avoided at nearly all costs. Finding the evidence that proves or disproves your liability early on is key to negotiating a favorable settlement.

Think Netflix or Pandora on steroids. Predictive Coding is about finding more like this, where this is a piece of unstructured data (an e-mail, slide deck, letter, memo, etc.) and the more like are documents that are conceptually similar—even though they may not contain the same words that made this relevant in the first place. Documents that are similar in concept but use substantially different language can be equally significant for litigation and investigations. That's why Predictive Coding goes far beyond traditional Boolean keyword search. To enable Predictive Coding, the system performs statistical analysis on the co-occurrences of all the words in each document ingested, even across millions of documents. It then creates sophisticated models around a handful of documents judged by attorneys to be relevant to the issue under review. It looks across the data set, finds more documents closely related to those models, and suggests them to the attorneys for priority review. As attorneys review the suggested documents and label them relevant or irrelevant, the system gets smarter, refining the document models for even better results in the next round. With Predictive Coding, attorneys can find virtually all the relevant content in a data set by reviewing just 10-30% of it, shaving off weeks or months of tedious review and surfacing critical evidence far faster.

What OpenText is doing about it: In 2016, OpenText acquired Recommind, a pioneer in advanced analytics for the legal industry for over 15 years.
With unparalleled Predictive Coding and other unique capabilities, OpenText™ Discovery Suite helps enterprises discover what matters in their data—faster and more accurately. 2017 is poised to be a banner year for legal technology, as awareness and experience with Predictive Coding are approaching critical mass. Our vision is to see machine learning used to add value to every matter, on virtually every data set. After all, who better to drive technological innovation than your venerable counsel?
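The loop described in this post (build a model from a handful of attorney-labeled documents, surface conceptually similar ones, refine each round) is, at its simplest, relevance feedback over vectorized text. Here is a minimal toy sketch in Python; the five-document corpus and helper names are invented for illustration, and real Predictive Coding engines use far richer statistical models than this plain TF-IDF scheme:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term by its frequency in the document and its
    rarity across the collection (a simple TF-IDF scheme)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(docs)
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "merger agreement signed by both boards",        # labeled relevant by an attorney
    "quarterly cafeteria menu and parking notices",
    "draft merger terms discussed at board meeting",
    "holiday party schedule for the sales team",
    "acquisition due diligence checklist for counsel",
]
vecs = tfidf_vectors(corpus)
seed = vecs[0]  # the "relevant" model built from the labeled document

# Rank the unreviewed documents by similarity to the model and
# suggest the closest one for priority review.
ranked = sorted(range(1, len(corpus)),
                key=lambda i: cosine(vecs[i], seed), reverse=True)
print(corpus[ranked[0]])  # the draft merger terms surface first
```

In a real system, each attorney label on a suggested document would be fed back in to rebuild the model, which is the "gets smarter each round" behavior the post describes.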
