Enterprise Content Management

The Truth About Indexing: Making the Most of Indexing During Your ECM Migration

You have an Enterprise Content Management (ECM) migration project underway, but you’re not happy with the current indexing structure of your archives. It turns out you’re in luck: an ECM migration can be the perfect time to change or enhance that structure. As you migrate your indexes, you can improve what’s already there. Why is indexing important, and why is an ECM migration a good time to restructure? Keep reading to find out.

Intro to Indexing

ECM repositories can store millions of statements, policies, images and other content. To keep all of that straight, they rely on metadata that describes the documents stored in the system. That metadata can be located in databases, in control files, or appended to the document contents themselves within the source system. It is used for many purposes, including discovery, validation, storage, organization, retrieval, distribution, delivery and deletion of content. Indexes help the system search that metadata.

Individual document types typically possess a unique set of indexes to describe their contents: an annual financial statement may be identified by searching for a customer’s name and a year, while a monthly statement could be associated with an account number and month. Understanding those differences helps recreate the relationships between the documents, the target ECM system and the connected business applications.

Indexing is a crucial part of the ECM migration process, supporting metadata enrichment, searchability and performance enhancement. During a migration, all indexing information has to be maintained and migrated to the target ECM system. Sometimes additional metadata is also needed in the target system, in which case it can be acquired, potentially by mining the content during the migration process to extract index information that meets the target system requirements and business use.
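To make the per-document-type idea concrete, here is a minimal sketch of index definitions keyed by document type, with the kind of validation step a migration tool might run. The field names and types are hypothetical, not any particular ECM product's schema:

```python
# Hypothetical per-document-type index schemas: each document type
# carries its own set of required index (metadata) fields.
INDEX_SCHEMAS = {
    "annual_statement": {"customer_name", "year"},
    "monthly_statement": {"account_number", "month"},
}

def validate_indexes(doc_type, metadata):
    """Return the set of required index fields missing from a document's metadata."""
    required = INDEX_SCHEMAS.get(doc_type, set())
    return required - metadata.keys()

# A monthly statement missing its month index would be flagged for enrichment,
# e.g. by mining the document contents during migration:
missing = validate_indexes("monthly_statement", {"account_number": "12345678"})
# missing == {"month"}
```

A check like this, run for every document as it moves to the target system, is one way the "mining the content to meet target system requirements" step could be driven.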
It’s important that everything lines up so that specific items can be found and delivered as needed. For that to happen, the right structure has to be in place.

Re-Indexing and Improving Your Current Structure

Other indexing issues can arise during an ECM migration. Metadata associations may be lost during the extraction phase, for example, a problem that’s particularly prevalent in legacy systems built in the days before XML and complicated by the fact that different document types often require different metadata rules and indexing requirements. To get around this, it is often necessary to rebuild indexes during the migration process through a technique called “re-indexing.” See an example of a re-indexing task within Process Manager in Figure 1 below.

Re-indexing involves adding to and/or changing your current index structure, perhaps because you want to enhance the performance of your current search capability, or to add more indexes that would help tie together more document types during a search. The goal is to enhance the end user experience and make old legacy archives more usable and efficient.

And that comes back to why it makes sense to alter a problematic indexing structure during the migration process. Because you’re enhancing and adding to the indexes anyway, it’s the perfect time to change index tag names for consistency, programmatically add indexes to documents by pulling information from external sources, and error-check to ensure index fields are what they were intended to be. You can also redact indexes for Payment Card Industry (PCI) compliance (i.e., masking parts of a credit card number or social security number) during this time. The migration process gives you an opportunity to do all of this, lining up your indexes so metadata is in order and everything is easy to find.
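As a rough illustration of the tag-renaming and PCI masking described above, here is a sketch in Python. The tag names and rename mapping are hypothetical, and real PCI redaction would follow your compliance team's rules rather than a simple regular expression:

```python
import re

# Hypothetical tag-name mapping: legacy archives often use inconsistent
# index tag names ("acct_no", "AccountNumber") for the same logical field.
TAG_RENAMES = {"acct_no": "account_number", "AccountNumber": "account_number"}

def mask_pan(value):
    """Mask all but the last four digits of a 13-16 digit card number."""
    return re.sub(r"\b(\d{9,12})(\d{4})\b",
                  lambda m: "*" * len(m.group(1)) + m.group(2), value)

def reindex(record):
    """Normalize index tag names and redact card numbers in one pass."""
    out = {}
    for tag, value in record.items():
        tag = TAG_RENAMES.get(tag, tag)          # rename tags for consistency
        out[tag] = mask_pan(value) if isinstance(value, str) else value
    return out

print(reindex({"acct_no": "00123456", "card_number": "4111111111111111"}))
# {'account_number': '00123456', 'card_number': '************1111'}
```

In a real migration this transform would run inside the extraction/load pipeline, alongside the error-checking and external-source enrichment mentioned above.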
For more information on indexing, as well as the other stages of the ECM migration process, read “ECM Content Migration: Best Practices in Document Archive Convergence,” a white paper co-written by Actuate and AIIM, the global community of information professionals.

Read More

2014: Three Trends in the Energy Sector

Energy engineering

We recently reviewed the 2014 predictions for the Oil & Gas and Utilities sectors from both IDC Energy Insights and Ovum. There are some fascinating changes taking place: the United States is set to become the top Oil & Gas provider by 2015, and China and India will combine to deliver almost 40% of the world’s new electrical generating capacity. A closer look at the challenges impacting the operations of Energy companies reveals that much remains the same; however, there are some nuances worth discussing.

1 – Better Information Management Eases the Pain of Compliance

Both Ovum and IDC correctly predict that compliance requirements will significantly increase, with regulators like OSHA, FERC, NERC and EPA leading the way. In the nuclear arena, NRC-related regulations will continue to reign supreme, and will take on prominence as countries like China increase their nuclear generating capacity. After Fukushima, it’s hard to argue that it’s a waste of time to triple-check how we execute the most critical processes.

Unfortunately, many companies continue to act only when reacting to a major challenge, such as an accident, major fine, or unplanned shutdown. They rely upon a significant trigger event to gain the company-wide support they need to seriously address compliance. And in some cases, even a significant trigger event isn’t enough to get them to act, exposing the company even further to future accidents, fines or shutdowns.

For companies that consider compliance proactively, there is hope. As companies improve information management and control their critical documentation, they are able to reduce the time and effort required to meet compliance mandates. And in the process, they also limit the risk of accidents, injuries, fines, and even production shutdowns.
2 – Global Projects and Plant Operations Will Expand Security Requirements

Whether it is to protect confidential plant layouts, operating data, or the engineering drawings required to fix gas pipelines, security will remain a constant business requirement in this sector in 2014. While the analysts identify the drivers for increased security, it’s a mistake to focus exclusively on locking down access to your documents. Instead, companies should start with enabling access: getting the right information to the right people using the right devices.

Access has to go hand-in-hand with security. Information systems must make entitled content quickly accessible while locking it down from unapproved outsiders. And the need to access information is clearly growing. For instance, companies are increasingly looking for mobile access to documents, replacing the old process of printing work orders, standard operating procedures, and engineering documents. By thinking about access and security together, you can ensure the plant operates efficiently, protect your valuable information, and react more quickly to the needs of today’s workforce.

3 – Innovation Will Address an Aging Infrastructure

But what about getting beyond the “have to” and improving competitive advantage through more efficient operations? The analysts are spot on with the insight that successful companies will gain competitive advantage through more efficient operations of their aging infrastructure, but they should have included the “graying” workforce as well. Today, too much critical operational and maintenance information remains “tribal knowledge,” with vital information locked away within a workforce rapidly approaching retirement. This can be even scarier than the security issues discussed earlier.
Having an enterprise-wide information strategy brings older plant documentation, including as-builts, into a consistent and comprehensive system, providing newly hired technicians and veterans alike the ability to maintain aging plants and facilities. Companies that embrace new technologies, automate like crazy, and introduce standardization to everything they touch will be better poised to get past “making things work” to truly innovate and find competitive advantage in the process.

Over the next several weeks, we will be talking much more about each of these trends, how our customers are approaching these challenges, and what technologies are making solutions possible. But let’s hear from you. What will you do differently in 2014? Are compliance, security, and operational excellence your top three trends?

Read More

Is it a Bird, Plane, or Business Model?

Life Sciences

2014 predicts dramatic change in the Life Sciences industry: more than many of the industries we enable through technology, Life Sciences is evolving in dramatic fashion. Almost as breakthrough as the drug delivery shift from swallowed tablets to stick-on patches, the industry’s business model is also changing its form. The main driver? Cost pressures. It’s estimated that approximately 90% of current IT spend in the industry goes to just keeping the lights on, with a mere 10% invested in innovation, according to the Gens and Associates 2013 Benchmark Study. As our first two 2014 predictions relate, with these kinds of numbers, it won’t be business as usual in Life Sciences moving forward.

1 – Efficiency First

For our first prediction, we foresee a fanatical focus on driving efficiencies in pharmaceutical research and development, including the clinical development process. Why is it our top prediction? Primarily because the historical model of developing drugs simply cannot be maintained any longer. Overly time-consuming or capital-heavy development just can’t be supported at the other end of the market, whether buyer-side or through the pricing at which pharmaceuticals are sold. Life Sciences customers will need to implement smart, efficient best practices to survive by leveraging technology innovations.

Rather than developing new, massive home-grown IT systems, which require extensive and repeated regulatory revalidation, customers will embrace a simpler approach. System configuration, not system customization, will achieve similar business results, as one example. A content management platform will integrate with existing systems across the broader clinical development ecosystem, as another case in point, rather than triggering bloated custom integration work. As the efficiency mindset takes hold, we expect customers to increasingly rely on smart technology.
2 – Collaboration is Contagious

Our second prediction highlights another dimension where change is afoot: outsourcing. Unlike traditional outsourcing, where accountability was limited and risk was less shared, tight collaboration in Life Sciences will emerge as an evolved form of outsourcing. As cost pressures continue, we predict Life Sciences will work even more extensively than today with third parties to efficiently execute key business processes. This may include getting drugs approved faster through more efficient clinical trials, as partners agree to exchange clinical findings. Or using outsourced experts to navigate select regulatory processes. Some envision contract manufacturers becoming even tighter business partners, sharing the risk.

As part of this prediction, we’ll add one other interesting twist: collaboration will become contagious. What’s not obvious from the spreadsheet point of view is that once Life Sciences partners recognize how much easier and more productive new collaborative solutions are, they will want the same technologies and business models. And they’ll want them even faster. Regulatory affiliates in the R&D space and investigators, for example, will not stand by and watch colleagues or competitors get their work done in days instead of months. They’ll start to question why they carry paper files while their colleagues submit findings through an iPad app. They’ll turn to their own internal submissions team and expect the same level of performance as the best collaborative partner. Collaboration will breed further collaboration.

Read More

Intensity: The Word of the Year in Financial Services

Financial Services

2014 Market Drivers & Predictions – as our car door mirrors used to say, “Images shown are larger than they appear.” Similarly, in the financial services industry (FSI), we see some familiar trends moving into the new year. But their intensity is much larger than many realize.

The Current Environment – Market Revival

Economic revival and growth mark the road we see developing ahead for Financial Services. Arguably we’re still in a delicate state globally, but all key indicators point to a market recovery. With this market revival comes a unique set of requirements. Not necessarily new ones, but certainly requirements with a greater focus and intensity. The unrelenting drive to acquire and retain customers, increase share of wallet, and maintain regulatory compliance remains top of mind. So too does the insatiable desire to reduce costs and increase profitability.

Unfortunately, as banks and insurers execute their expansion plans, they’re putting tremendous strain on existing IT and service infrastructures and the associated business operating models. The strain only becomes more intense as growth and competitive forces accelerate. Given these market forces moving into 2014, we should see three key outcomes take place:

1) Automating Compliance – Moving to the Next Generation of Information Governance

We’ll see an increasing use of automation to address compliance and escape regulatory fines, lengthy audits, and reputational risk. In 2014, the volume, complexity, and pace of change of regulations will be greater than in previous years.
Scrutiny on decision-making, regulatory reporting, and compliance readiness in FSI will be – in a word – “intense.” Consequently, there’s an immediate need to automatically collect all relevant regulatory policies (globally, regionally and locally), analyze information risk across all content across the enterprise, and enforce the right management policies on a continuous basis – going well beyond traditional records/retention management. Interest and adoption will be fueled by the promise of saving literally millions of dollars in reduced fines and operational costs.

2) Big Data – Accelerated Adoption via Proven Results

The well-publicized Big Data trend will evolve from theory and hype to broad adoption, steeped in proven business results. Insurers are already up-selling policies by noticing when household teenagers reach driving age and other life events, as well as providing dynamic policy pricing via telematics. Banks are also now introducing rewards programs based on spending patterns and other insights. The use of big data will accelerate as it becomes ingrained in core transactional applications, making it a business and competitive necessity.

3) Modernizing the IT Infrastructure Becomes a Business Imperative

Throughout 2013, we’ve witnessed ATM networks going down, credit card data compromised, and extreme transaction delays – most during peak seasons. This has given the industry a rude wake-up call: you simply can’t delay IT improvements a second longer; the infrastructure has reached its breaking point! Thus, we’ll see an increase in core infrastructure investments in off-the-shelf packaged applications, digitization of business processes, and mobility and cloud solutions. Similar to Big Data, we’ll see a dramatic transition from exploratory moves to organizations placing bigger bets on cloud offerings. Granted, the pace and scale of this transition will differ by sector within FSI.
However, few will resist the allure of substantial cost savings and rapid delivery of new products and services to a demanding and competitive market. Also, a new standard for engaging customers will emerge – one that’s collaborative, mobile-aware, and highly personalized. Gracefully welcoming customers and seamlessly connecting them to core business processes will eliminate traditional sources of customer frustration and provide the next product/service differentiator. Consequently, organizations will aggressively seek ways to easily and securely sync and share sensitive information between customers, approved agents and other third parties – anytime, anywhere, on any device.

Lastly, the need for rapid transformation will drive demand to quickly and safely decommission legacy applications while protecting sensitive data. All told, proven cost savings, risk mitigation and time to value will accelerate investment in these modernization efforts. Arguably, we’ve just scratched the surface of what we can expect in 2014.

Read More

Key Trends for Government in 2014 – Cloud and Mobile ECM Solutions


This new year will bring exciting changes in how government agencies will be able to leverage cloud and mobile technologies for improved efficiencies and better citizen access to government services. The barriers to government adoption of cloud-based Enterprise Content Management (ECM) solutions will continue to drop in 2014. Issues relating to procurement processes, contractual vehicles, and security concerns will be resolved such that government agencies can take better advantage of the cost and resource savings provided by cloud-based ECM solutions.

Many government agencies will make their first attempts to move specific content-centric applications to a cloud model. They will start small with an application where they can test the waters with low risk. A good example of this type of initial application would be something like correspondence management, or a public records request process, where they will be able to leverage the cloud model for improved efficiencies and better interactions with the public.

Government agencies will continue to develop or purchase mobile device applications to support two primary functions:

Citizen Interactions – the public wants to be able to interact with government agencies via mobile phones and tablets. “Driver’s license renewal? There’s an app for that!”

Field Worker Support – building inspectors, case workers, and public safety workers all desire to work more easily and efficiently. The ability to access and enter data from mobile devices will continue to grow to support these types of workers.

The current generation of government mobile applications is data-centric and allows users to interact with data residing in databases from mobile devices. What is lacking in these applications is the ability to interact with content such as documents, images, voice and video.
It is great to be able to quickly retrieve case data from a mobile device, but how to view the case documents and submit new photos or videos is not addressed by the current generation of these apps. When asked whether their mobile apps support this type of content, application developers and providers seem to have few answers, and some have suggested that such things may not be possible with currently available technologies. These people need to better understand our content solutions for mobile devices: these capabilities are absolutely available now, and mobile apps can deliver not just the case data but the case documents, pictures and video as well. This is where we are headed in 2014.

What are your plans for ECM solutions in the cloud? How are you addressing field worker and citizen access through mobile devices?

Read More

2014 Outlook for Patient Care – Electronic Health Records


Our upcoming blog series will assess and project how healthcare IT (HIT) innovations, national priorities and even demographics may affect the healthcare industry in 2014.

Temperatures Will Rise for Electronic Health Records (EHR) Adoption

Fever has struck with the trend of utilizing healthcare information technology to vastly improve patient care and enhance quality. The first assessment in our HIT innovations blog series focuses on how the rapid pace of EHR system adoption will only accelerate. While quality benefits are primarily leading providers down this path, cost management benefits add to our optimism that EHR adoption will accelerate. By now, we all recognize that the ability to access, share and optimize patient information is critical to delivering high-quality care and lowering costs. But with new care delivery and reimbursement models evolving within the industry, this optimization ability will be crucial to organizations’ long-term survival. We are, in fact, moving in the direction of data-driven healthcare on a global basis.

As an added benefit, countries like the USA and UK are offering providers incentives for healthcare IT investment. These subsidies and incentives are an added catalyst that will continue to hurry implementation, and one that offers patient care a healthy outlook as organizations become able to use data for improved decision-making, as well as operational and clinical transformation.

However, despite rapid investment in EHRs, only a fraction of a patient’s information is typically available in an electronic format—somewhere in the range of 50%, depending on your country of origin. This is mainly attributed to the healthcare industry still being highly paper dependent, with some 38% of documents used remaining paper based (IDC Health Insights, 2013).
The incomplete view of a patient’s health record that we are left with can be disastrous if critical information is missing from the patient record—anything from medication history to lab results. Fragmentation prevents the clinician from seeing a complete view of patient history, diagnosis and treatment at the point of care, which is why we also expect the industry to see an increase in the implementation of integrated patient record (IPR) solutions.

A recent IDC Health Insights white paper explained how IPR solutions can bring together clinical media and medical images with patient-critical paper documents and information. Comprehensive, patient-centric records will allow hospitals to improve outcomes, enhance patient quality and reduce costs. Simply put, IPR solutions are allowing organizations to transform how they view, organize, access, manage and use patient information to create efficiencies and optimize care delivery, and this is something we only see becoming more relevant in 2014 and beyond.

Check back with us soon as we continue to outline which healthcare information technology innovations will continue to advance and alter the healthcare sector in 2014. In the meantime, read this customer story on EHR.

Read More

I Know It’s Here Somewhere: The Downside of Data Hoarding

There’s a show on TV called Secret Hoarders, where a shamefaced member of the public opens their home to the cameras and reveals themselves to be an ‘enthusiastic saver’ of items. The reality is that their house is full of junk because they can’t bring themselves to throw anything away. The gist of each episode is that a psychologist analyses the situation and usually discovers the subject’s behaviour is a direct reaction to some past traumatic event that’s made them believe that, by hanging on to everything, they’re safe and secure.

In our professional environment there are hoarders of a different kind. I’m talking about data hoarders, the office pack rats who have a deep-seated need to keep everything (every brief, every email, every draft of every document) tucked away somewhere on the off chance they may need it one day. Like their house-bound comrades, these stockpilers are most likely reacting to some distressing event from their past. Maybe a server crashed just as they needed an important document. Maybe they found themselves under the bus as a result of a broken message trail. Maybe they accidentally erased vital information that wasn’t backed up. Whatever the reason, they’ve vowed “never again” and now, secure in the knowledge that they have 3TB of possibly relevant company data stashed away on a hard drive, they’re prepared for any eventuality.

They may even believe they’re going to be more efficient because of this (and it may have started out that way) but, left to their own devices, I’m willing to bet their file-naming conventions are now closer to “latest_latest_latest_version.doc” than those specified by a proper information governance platform that includes versioning. And that’s just the beginning of the security and productivity issues: Privacy and security stakeholders should be cringing at the thought of an unsecured repository of undefined enterprise information sitting off on its own outside the firewall.
The development of ideas and initiatives will be slowed or even derailed by someone whose own records don’t sync with the rest of the team. Ever been part of an approval chain that has multiple versions of the same document circulating? Good times! Remote and mobile access to the data will probably be limited; ironic, considering the hoarder probably started their squirrelling in an attempt to have everything at their fingertips.

The Doctor Prescribes…

Fact is, data hoarding is a learned trait (and, as a side note, one that can manifest itself across entire departments and divisions) driven by lack of trust, accessibility, and structure. The cure is a strong dose of an information governance program that instils these qualities and additionally ensures a proper security and governance framework. Treatment begins with the formulation of a comprehensive, enterprise-wide information governance policy. Note here that development should involve consultation (I found a medical dictionary online!) with all appropriate specialists: legal, IT, records management, and more. Next, embed a best-in-class Enterprise Content Management (ECM) suite to facilitate the management and administration of the policy.

As an additional step, address the symptoms specific to your situation: Is lack of mobile access to vital info a contributor to the hoarding? Relief can be found with ECM Everywhere, an easily integrated solution that provides seamless access to mission-critical information through mobile devices while extending your permission, security, and vigilance policies beyond the firewall.

Data hoarding is a treatable condition. It can be un-learned, and the irrational fear of not being able to get to the right information eradicated. If IT departments and system architects offer end users a better way of working, together we can start data hoarders on the road to recovery.

Read More

The Real Risks of Poor Archiving

The massive volume of content and information in organizations is a problem that’s not going away. Without an archiving strategy, businesses face high storage costs, legal and regulatory risks, and incomplete compliance due to inefficient technology. While the costs associated with slow response times and wasted resources are important, what’s more critical from an information management perspective are the acute business risks and liabilities that can impact your organization’s competitiveness and bottom line. A successful archiving program must factor in these four key risks:

1. Keeping too much: Seventeen percent of business leaders say they don’t want to throw any content away—and 14 percent say they don’t have a defined retention strategy, so they just go ahead and save everything, according to the Forrsights Hardware Survey, Q3 2013, from Forrester Research, Inc. The increasing number of compliance requirements and litigation events means organizations are saving all kinds of enterprise information. But without an archiving strategy, businesses are keeping too much information, leading to exorbitant storage costs and enormous amounts of time lost to sifting through mountains of data. Leading enterprises need to structure, classify, and retain essential business information and official records in a way that makes it easy to access data and respond to audits, legal requests, and electronic investigations.

2. Keeping too little: There’s a fine balance between keeping too much and not keeping enough. A defensible deletion strategy ensures the right information is kept for the right amount of time and on the right tier of storage, helping organizations manage the lifecycle of data and ensuring the retrieval of complete, accurate records. As content ages, it needs to transition into an enterprise archive where it can be managed for appropriate access and sharing.

3. Legal compliance: Organizations face a number of complex and evolving compliance requirements.
If an eDiscovery or legal hold request can’t be met quickly or thoroughly, it leads to a long, incomplete, and frustrating process involving more people than are needed. Robust archiving solutions incorporate all types of enterprise information from a variety of sources, helping reduce storage and management costs, mitigate legal risks, and ensure compliance.

4. Regulatory liabilities: Every business sector faces compliance regulations, but some—such as financial services, healthcare, and energy—face more than others. Organizations are at serious risk if they do not archive all their electronic content in compliance with regulatory, legal, and industry best practices. Best-in-class enterprises are turning to robust archiving solutions to handle all types of information in one repository, making it easier to manage and less costly to operate. Information becomes simple to find, response times are improved, and the time and costs involved with eDiscovery requests are significantly reduced.

Learn more about archiving and discover the four steps to a successful archiving strategy in the white paper, File Archiving: The Next Big Thing or Just Big?
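The defensible deletion idea described above (keeping the right information for the right amount of time, with legal holds respected) can be sketched as a simple retention-schedule check. The record classes and retention periods below are hypothetical; real schedules come from legal and regulatory analysis, not from code:

```python
from datetime import date, timedelta

# Hypothetical retention schedule (years by record class); an actual
# schedule would be defined by legal, records management, and compliance.
RETENTION_YEARS = {"financial_record": 7, "correspondence": 2, "hr_record": 6}

def disposition(record_class, created, legal_hold=False, today=None):
    """Return 'retain' or 'delete' for a record under the schedule.

    Records on legal hold are always retained, regardless of age."""
    today = today or date.today()
    if legal_hold:
        return "retain"
    years = RETENTION_YEARS.get(record_class)
    if years is None:
        return "retain"   # unknown classes are kept pending classification
    expiry = created + timedelta(days=365 * years)
    return "delete" if today >= expiry else "retain"

# A nine-year-old piece of correspondence is past its two-year period:
disposition("correspondence", date(2005, 1, 1), today=date(2014, 1, 1))  # 'delete'
```

Running a check like this on a schedule, and logging each disposition decision, is what makes deletion "defensible": every record is kept or removed for a documented reason.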

Read More

As Compliance Audits Increase, Archiving Becomes More Important

The Financial Industry Regulatory Authority (FINRA) handed out a late Christmas present to a London-based bank: a fine of almost $4 million for not having proper records retention. The bank “accidentally” lost years of emails and chats. “Ensuring the integrity, accuracy and accessibility of electronic books and records is essential to a firm’s ability to meet its compliance obligations,” said Brad Bennett, FINRA’s chief of enforcement and executive vice president, in a statement.

Especially after the world financial crisis, banks are under a great deal of pressure and have to make sure they comply with regulatory obligations. One of these is managing their records. With the increasing variety of ways in which transactions are done, the complexity of managing those records will only grow. Regulatory authorities must also be aware that record keeping has not been a high priority in the financial industry. In the last 10 to 15 years, banks have made several acquisitions, leaving most of the old back-end systems in place and just replacing a logo, not spending much on IT innovation or knowing what information is in those systems. Or worse yet, not knowing what information should be there. Emails, chats, tweets, and so on could be part of a transaction and therefore have to be kept—or archived—for the appropriate regulatory period.

Those who think the audits done by authorities like FINRA only happen in the financial industry are mistaken. Pharmaceutical companies, food and consumer packaged goods manufacturers, and oil and gas organizations, to name a few, are also subject to these kinds of audits and are still being fined from time to time. Anti-competitive behavior is also around, including in the construction industry, and companies have received significant fines.
There is the recent LIBOR cartel scandal, the consumer television cartel scandal of a few years ago, and recently manufacturers of consumer electronics have been investigated by the European Commission for potential anti-competitive behavior, just as Microsoft and Google probably are on an ongoing basis. A trend is emerging in which governments and regulatory authorities have increased budgets for handing out fines, which will lead to more compliance audits. They have found a way to close their budget gaps. Even worse, fines are rapidly increasing to force companies to think twice about non-compliance.

Those companies that believe a “let’s keep everything” strategy will make them compliant are wrong, too. It is not surprising that other compliance laws are being put in place that prevent certain information from being kept longer than 6 to 24 months. Not to mention that keeping everything will slow down business performance.

Managing archives has become a topic high on the IT agenda, and where it has not yet become a priority, it will be soon. The volume of electronic records is increasing rapidly; local jurisdictions are forcing companies to keep records on local soil, preventing them from moving all archives to a public cloud; and regulatory authorities are increasing the number of audits, making archiving a true business challenge. It is a challenge that will be impossible to take on without the right records retention program in place, supported by a single archive system that can demonstrate defensible deletion to any auditor and prevent fines. Then the challenge of managing archives becomes a great business case.

Learn more about Archiving at www.opentext.com/archive.

Read More

Will PRISM and the NSA affect the move to the Cloud?

Many organizations are considering a move to the cloud to take advantage of the inherent IT efficiencies and predictability of costs. Cloud-based archiving – and the more recent trend of migrating corporate email to the cloud – have led the charge for CIOs striving to realize the benefits of off-loading these applications. Neither is without risk, and there are implications to be considered when contemplating a move to the cloud. Cloud archiving has been around for years. Concerns have historically centered on the risks of storing corporate data off-premise; the resiliency and reliability of the applications managing the data in the cloud; and rapid access to data by users (e.g. knowledge workers, records managers, and legal users). Hybrid clouds, improved archiving services, cost savings and operational efficiencies have helped ease these concerns. According to Radicati, 31% of enterprises rely on cloud archiving today (“Information Archiving Market, 2012-2016”), and that number will continue to rise. The recent move of email infrastructure to the cloud, led by Microsoft Office 365 and Google Apps, has been brisk, but could be slowed by data privacy and jurisdictional concerns. For instance, Google has long come under fire for the amount of data it collects from its users. The Wall Street Journal recently reported that “Google’s information gathering about Internet users rivals that of any single entity, government or corporate.” The company remains tight-lipped about much of its internal data-handling procedures, and it treats data collected from its services as a commercial product that can be exploited at will by any other part of the company. When The Washington Post and the Guardian newspapers broke news about PRISM, the surveillance program of the National Security Agency (NSA) and FBI – and the unprecedented access to customer data provided by Google, Yahoo! 
and Microsoft (amongst others) – it shed light on the level of granular surveillance activity by the US federal government and the degree of cooperation given by technology providers. Although the federal government has stated that no surveillance has been done without due process, it still raises questions and concerns about how your private data is stored and accessed in the cloud. In addition, jurisdictional concerns emerge from the very nature of the public cloud. As servers are spread out geographically or regionally to provide quick access, redundancy, and protection, those very features can raise questions with legal ramifications around where corporate data is stored and what legal jurisdiction it falls under. For instance, Microsoft has recently stated that it cannot guarantee that data stored in Office 365 will remain on US soil, that those risks are inherent, and that it makes no assurances that data remains in the US. In Europe, similar concerns are present and, along with the US Patriot Act (2001), have led to changes in European data protection and privacy laws. For instance, in a landmark ruling, Sweden’s data protection authority (the Swedish Data Inspection Board) prohibited public sector bodies from using the Google Apps cloud service. Data privacy and inter-jurisdictional concerns are important checkpoints when weighing acceptable risks for corporations and government bodies making the move to the cloud. Learn more about cloud archiving solutions at www.OpenText.com/CloudArchive. And don’t forget to check out Part 2: Governing Data in the Cloud next month.

Read More

A bit of Pepperoni, 2MB of Cheese: What do Pizza and Engineering Documentation Have in Common?


Every single gaming/console aficionado in this world knows that there’s only one right answer when contacting your local pizza dealer for dinner. And this is: D-E-L-I-V-E-R-Y. You can’t afford the luxury of leaving home to pick up three family-sized pizzas in the middle of a Halo tournament. You know what would happen: the kill-streak would be gone, the perfect sync between you and your mates would vanish, the mood would be killed, and the magic broken. No sir—it’s too risky. Through the years, you have mastered your pizza dealer’s website and you know you can place that order in less than 14 seconds, one-handed, with the iPad on your lap, beverages included. Just never forget to choose DELIVERY and you’ll be safe. You are the Commander and you know you made the right decision. After so many hours of my life spent eating pizza, playing Halo and dealing with engineering documentation, I can’t help but find similarities with technical considerations in the Engineering and Construction world today. Even though the Internet provides us with very useful tools to access the information we need in our daily work, these activities rely on several factors to work properly, namely: the size of the information, the available bandwidth, the connection speed, and the distance. Downloading big files over a low-bandwidth, radio-based connection from another continent can be a very time-consuming, frustrating and sometimes impossible task. It is very common for Engineering and Construction companies to build facilities across the planet, close to where the actual natural resources are located. Unfortunately, the professionals working at these facilities or construction sites face connection issues every day, as they work thousands of miles away from their colleagues in the office. I experienced this first-hand. My first job was as a developer/administrator for a Spanish Engineering, Procurement and Construction (EPC) company. 
At that time, the EPC was developing a project for a Spanish Oil & Gas company to revamp a refinery in La Pampilla, Peru. Back then, our Internet connection was a fraction of the speed achievable nowadays, and the bandwidth available in La Pampilla also contributed to the bottleneck. It took hours for personnel in Peru to download a drawing from the repository in Spain. We did everything humanly possible to improve the performance, but even after all that work, the results were not stellar: users still had to click on the documents they wanted and wait for them to download. Putting it in pizza terms, waiting for documents to download interrupts your activities and is the equivalent of choosing “pick-up” when ordering your pizza. How can you switch to a DELIVERY model with your engineering documentation? The answer is easy: caching. In this scenario, think of a cache as a remote bucket of files with a little intelligence. Instead of waiting for a document to travel from Spain to Peru when it is requested, it is possible to have it delivered—cached—in advance. With caching, professionals in Peru can quickly browse the documents in a web application hosted in Spain; when they select a document, it is served immediately from the cache in Peru. The documents have been automatically copied overnight, based on rules set up by the project administrators. In a previous blog, Three Ways to Leverage your EDMS in Engineering, I talked about the critical importance of document metadata, and how it helps you take actions and make decisions based on the value and features within your documents. This is a perfect example of using metadata, in this case “Approved for Construction”, to identify which project documents need to be cached at the remote site. Documentum Branch Office Caching Services (BOCS) performs this and other tasks automatically for you. 
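To make the idea concrete, here is a minimal sketch of a rule-based pre-caching pass in the spirit of what BOCS automates. All names, the document structure, and the rule format are hypothetical illustrations, not the actual BOCS API:

```python
def select_for_caching(documents, rules):
    """Return documents whose metadata matches any pre-caching rule.

    Each rule is a dict of metadata key/value pairs that must all match.
    Matching documents would then be copied to the remote cache overnight.
    """
    selected = []
    for doc in documents:
        metadata = doc.get("metadata", {})
        for rule in rules:
            if all(metadata.get(key) == value for key, value in rule.items()):
                selected.append(doc)
                break  # one matching rule is enough
    return selected

# Hypothetical rule: cache only content approved for construction at the Peru site.
rules = [{"status": "Approved for Construction", "site": "La Pampilla"}]

documents = [
    {"id": "DWG-001", "metadata": {"status": "Approved for Construction", "site": "La Pampilla"}},
    {"id": "DWG-002", "metadata": {"status": "Draft", "site": "La Pampilla"}},
]

to_cache = select_for_caching(documents, rules)  # only DWG-001 qualifies
```

The point of the sketch is that the selection logic is driven entirely by metadata, which is why disciplined metadata (like the “Approved for Construction” status above) is a prerequisite for this kind of automation.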
Because of its simplicity and usefulness, many companies around the world use it to successfully address issues of low bandwidth to remote sites. Comments? Questions? Write your comments below—but my pizza has just arrived and the tournament is about to begin. I will respond soon. Buon appetito, Master Chief!

Read More

The Need for Modern Electronic Archiving Solutions

With the increasing prevalence of smart devices in this fast-moving digital world, customers demand that all their information be available whenever and wherever they choose. Whether it’s a credit card or savings account statement, or pension or insurance documents, they want it at their fingertips. Today, companies are looking for cost-effective repository solutions that can store virtually all document types and make them available to customers electronically, with strict security and audit measures. These solutions store documents only once and retrieve them in multiple formats (e.g. AFP for printing, PDF/Accessible PDF for e-presentment). Organizations also want a modern repository that is easy to deploy into production with minimal effort, in order to achieve maximum ROI in less time. Moreover, many companies are looking for an archiving solution that can replace their existing legacy ECM system, or work in conjunction with their existing archiving systems to fulfill new requirements. Clearly, when evaluating repository solutions, organizations take into account many factors, including business requirements, cost, and expected ROI, but the technical characteristics of the solution and its performance remain key for decision makers. Below are questions that enterprise IT managers often ask when assessing the capabilities of a repository solution. These questions should be considered when comparing solutions, as they provide insight into how well the repository will perform and suit modern needs.
Q. Can the Repository store print streams like AFP and Metacode?
A repository must have the capability to store any print stream documents, including AFP, Metacode, PCL, TIFF, PDF, PDF/A and Accessible PDF formats. Additionally, the repository must be able to reduce the storage requirements of PDF documents by using Document Storage Reduction technology.
Q. Can the Repository be installed on a mainframe? What other platforms should it support? 
A modern repository can be installed on a mainframe and supports all major platforms, including Windows, UNIX, AIX and all flavors of Linux.
Q. Can the Repository be deployed on the Cloud?
The repository must have the capability to be deployed on any cloud infrastructure.
Q. Can the Repository store MS Office documents like Word, Excel, or PowerPoint? Can it also store other document types like HTML or XML?
A modern repository must support storage of virtually any document type (see Figure 1, demonstrating a common repository interface screen). For commonly used formats like MS Office Word, Excel, PowerPoint, XML and others, it must have the capability to parse the document to extract metadata and use it as index values for the respective documents (Figure 2).
Q. Can the Repository store customer emails? Can you analyze the emails in the Repository?
A modern repository solution can store emails in their original format. The repository should also be able to parse emails for basic information such as email address, subject, from, to, CC and other fields. In addition, the repository needs to include an analytics plugin to analyze emails and enable targeted marketing messages to customers.
Q. Can I index the documents when loading into the Repository? Is there a facility to edit the index details later?
A modern repository can index documents either using the metadata of the document or by extracting the main attributes (like account number, customer ID, credit card number, etc.) from the content (Figure 3) and using them as index values. It also allows editing indexes after the loading process has been completed (Figure 4).
Q. Can I search for the documents in the Repository?
Yes, the repository must provide multiple interfaces to search documents in various application groups. This provides the capability to search on all index fields. 
It also enables customers to use plain SQL-type operators like (=) equals, (>) greater than, (<) less than, and percent (%) (Figure 6).
Q. Is it possible to access documents from legacy archives? Can I have a federated search from multiple ECM instances?
A modern repository provides multiple connectors for major ECM providers like IBM CMOD, IBM FileNet P8, EMC Documentum, IBM FileNet Image Services, and Microsoft SharePoint (Figure 7). It must also connect to any ECM system that supports the CMIS (Content Management Interoperability Services) framework via a CMIS adapter. The repository can also create a federated layer to search documents across multiple ECM instances.
Q. Are there any restrictions on the volume or number of documents to be stored? Can the Repository support billions of documents?
A modern repository must be tailored to support high volume transactional output (HVTO). The solution should not only store high volumes of documents, but also process high volumes of transactions with ease.
Q. Does the Repository support COLD archiving? What other storage technologies does it support?
The repository must support COLD archiving. It provides a logical layer over the physical storage and is agnostic of the storage medium used. With just a couple of configuration changes (Figure 8), it is possible to switch from one storage medium to another. The repository supports multiple storage media, such as Hadoop, Centera, etc.
Q. Does the Repository convert print stream documents to PDF?
Yes, the repository must have the capability to transform print stream documents to PDF on the fly whenever customers want to view them.
Q. Does the Repository support Accessible PDF documents suitable for individuals with disabilities, including the visually impaired? Does it comply with accessibility regulations?
Yes. A modern repository must have the capability to convert print stream documents or existing PDFs into Accessible PDF documents (Figure 9). 
Look for vendors that partner with multiple industry organizations in different regions across the globe to ensure that all Accessibility requirements are met. If you would like to learn more about Actuate’s Repository solution, download the Repository Datasheet or contact Actuate.
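To illustrate the SQL-style index search mentioned above, here is a minimal sketch of how such operators could be evaluated against index records. The field names, records, and the treatment of % as a SQL LIKE-style wildcard are assumptions for illustration; real product query syntax will differ:

```python
import fnmatch

def matches(record, field, op, value):
    """Evaluate one SQL-style condition against an index record."""
    actual = record.get(field)
    if actual is None:
        return False
    if op == "=":
        return actual == value
    if op == ">":
        return actual > value
    if op == "<":
        return actual < value
    if op == "%":
        # Assume % behaves like the SQL LIKE wildcard; map it to a glob pattern.
        return fnmatch.fnmatchcase(str(actual), str(value).replace("%", "*"))
    raise ValueError(f"unsupported operator: {op}")

statements = [
    {"account": "100234", "year": 2013, "type": "annual"},
    {"account": "100987", "year": 2012, "type": "monthly"},
]

# Find statements after 2012 for accounts starting with 1002.
hits = [r for r in statements
        if matches(r, "year", ">", 2012) and matches(r, "account", "%", "1002%")]
```

A query like this only works if the index fields (account number, year, document type) were captured consistently at load time, which is why the indexing questions above matter so much.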

Read More

What Does Innovation Mean in the Health Sector?


There are a few considerations around patient data and innovation, based on a recent IDC study about Transforming Health. First, as the report details, despite vastly different infrastructure maturity levels, hospitals, regions, and countries embrace innovation as the primary path forward in healthcare. It’s not just cost cutting anymore; it’s transforming the patient’s experience, which in turn ends up driving out expenses. From my perspective, this is exciting. I see how technical innovation is being implemented in different areas across EMEA. In Finland, for example, the choices in how to implement data and system connectivity among healthcare entities led to a very positive outcome. Allowing hospitals to select which applications they use, while tying each back to a central national patient archive repository that is standards-based (e.g. HL7, HDS), has been key to their success. Hospital administrators use application interfaces they’re comfortable with. Doctors use applications they can trust. Yet all tie together to deliver the patient a single view of their medical records. Whether at a clinic in Northern Finland or visiting a healthcare specialist, patient data throughout the country is consistent. The result is a secure, appropriate, compliant way to access and share patient information. In the UK, the choice to force a technology decision upon healthcare entities is part of what led to its backslide in patient records management. Once faced with unfamiliar interfaces, functionality they didn’t ask for, or specifications incompatible with their other IT systems, healthcare entities resisted. And if patient records are not shared across all the stakeholders, then instead of drug allergy information that follows the patient to the emergency room and on to his or her post-care provider, such content remains disparate and siloed. The patient becomes the paper keeper, carrying records from provider to provider. The major side effect? 
Kicking off repeat workflows at each individual provider, dooming productivity, and introducing mistakes that, at their worst, cost lives. But what does innovation mean in IT health systems transformation? Well, we see that decoupling data from the applications that generated it can change how quickly and easily a healthcare provider can migrate to newer or different technologies. If hospitals keep their best-of-breed applications, yet tie patient data into a standards-based, centralized repository, it is possible to provide the best level of service at that point in time. They are not mired in different migration cycles for different applications, especially legacy applications. Patient data is “shared” by the latest hospital to join the network, or the coolest wearable diagnostic tool in a patient’s home. Essentially, integrating patient information in a secure manner provides the connective tissue for collaboration across the health value chain, and this “accumulates” innovation moving forward. And avoiding repeated legacy migration costs frees up more resources to innovate even further across different dimensions of the patient experience. Insurance providers can offer incentives to patients who follow all of their preventative measures, for example. Insights extracted from integrated patient records will be valuable in a wide variety of use cases, from hospital administrative processes, to clinical decision support, to collaboration between university hospitals and life-science research organizations. The list of innovations is endless, but it all starts and ends with patient information architecture, and how you manage patient data.

Read More

Trouble Finding Your Car Keys? Too Often, the Same Goes for Engineering Content


For anyone who has ever left the house without car keys, a smartphone, or that critical pair of glasses, the concept of content standardization should be a welcome topic. By calling the same thing the same name across all files and hierarchies, chances are you’ll more readily locate what you’re looking for. Let’s start with the basics. In the Energy & Engineering space, standardization can mean always calling Power Plant 1 “Power Plant 1”, and not PP1, PwrPlnt1, or Henry’s Favorite Power Plant. It also means developing a taxonomy that helps address those times when you will, as a human, walk out the door without your keys. For an oil rig, for example, content management standards can help simplify the management of thousands of pieces of equipment. These can be valves or pumps, and each piece has its own tag number, maintenance schedule and technical specifications. If something breaks, the operator manual becomes a highly valuable piece of content that must be found quickly. Just as important, anyone searching for it must know which terms to look for (“plant”, “facility” or “asset”?), and must access the latest approved content. As we’ve covered in other blogs, picking up the wrong revision of a document could mean drilling a hole in the wrong place, or triggering a serious disaster. Nowadays, things are easier. While historically Energy & Engineering companies kept complete paper user manuals, the advent of improved networking, intranets, and the Internet has brought a just-in-time alternative. The massive manual paper process, even among companies with rigid procedures, always lagged behind. Today, documents are instantly available online and, with proper access control and security, critical content stays fresh, accurate and readily available. But immediate access to all online content can be overwhelming. This is where standardization, metadata, taxonomies, and search become critical. 
Without an effective strategy for organizing your content, finding a maintenance schedule for Power Plant 1 remains a challenge. And if you find more than one maintenance schedule, ensuring that you use the right version becomes the next hurdle. What new things can you do to drive operational efficiency or safety? I see the benefits of new standards like PAS 55 (ISO 55000, the specification for the optimized management of physical assets), NORSOK (Norway’s standards for petroleum industry developments and operations), and ISO 14224 (a standard format for maintenance and reliability data). It’s critical that your early efforts start with standards-compliant data. Otherwise, not only will you keep losing your car keys, but you’ll never be able to hand them to the valet when you’re tired of looking for parking spots. The burst in data that Energy & Engineering companies need to keep – electrical, plumbing, piping, structural, etc. – only increases the requirements for rigorous organization and a sound content management strategy. Using standards from the beginning means not only a smooth start, but the potential for innovation down the line. Are you using standards to help organize your engineering content and physical asset documentation? Why or why not? Post your comments below.
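The naming problem described above can be attacked with a controlled vocabulary: every alias resolves to one canonical name, and anything outside the vocabulary is rejected rather than silently stored. The alias table below is a made-up illustration; in practice the canonical terms would come from your taxonomy and standards like ISO 14224:

```python
# Hypothetical alias table mapping observed names to canonical asset names.
ALIASES = {
    "pp1": "Power Plant 1",
    "pwrplnt1": "Power Plant 1",
    "henry's favorite power plant": "Power Plant 1",
    "power plant 1": "Power Plant 1",
}

def canonical_name(raw):
    """Resolve a raw asset name to its canonical form, or fail loudly.

    Failing on unknown names forces new aliases to be reviewed and added
    to the vocabulary instead of creeping into the repository unchecked.
    """
    key = raw.strip().lower()
    if key not in ALIASES:
        raise KeyError(f"unknown asset name: {raw!r}")
    return ALIASES[key]
```

With a resolver like this applied at ingest time, a later search for “Power Plant 1” finds every document, regardless of what the author originally typed.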

Read More

Daegis Streamlines Archive Collections for eDiscovery

Daegis AXS-One archive now delivers eDiscovery-ready custodian-level collections. New functionality in the Daegis AXS-One Case Manager module enables organizations to more quickly and easily identify and collect archive data responsive to litigation or investigations. Using powerful search tools built into the archive user interface, clients can find, tag and collect data subject to a hold order. At the time of extraction, the archive collects and packages this data by custodian for loading into the hosted eDiscovery platform, Daegis Edge, without requiring further manipulation of the data. Using a hosted review platform for eDiscovery frees organizations from maintaining in-house software while giving them complete control of the review and production processes and of their data. Daegis streamlines the process of identifying responsive data in the archive and makes it readily available for eDiscovery without extra steps or moving the data through various software tools before it can be reviewed. Organizations have complete control of the discovery process from start to finish using the sophisticated case management tools built into the AXS-One archive, releasing only the data that is needed for a matter or investigation. The combination of the Daegis AXS-One Archive and the Daegis Edge eDiscovery platform provides organizations with purpose-built tools that safeguard their data in order to effectively manage information and respond to investigations and/or litigation.

Read More

Embracing Accessible PDF Documents: Key Learnings from the Accessibility Seminar in Toronto

First published on ECM Trends from the Field. We were joined by approximately 15 people from the Ontario accessibility community earlier this autumn at an exclusive seminar we hosted at the Renaissance Hotel in downtown Toronto. The presenters were Thomas Logan, Senior Accessibility Consultant from SSB BART Group; Shannon Kelly, Accessibility SME from Actuate; Lou Fioritto, CEO of Braille Works International; as well as Jeff Williams, Director of Product Management, and Will Davis, Manager NA Presales, of Actuate. The presentations provided in-depth insight into the document accessibility problems facing organizations in Ontario in particular. We got some great content, which is, of course, tagged for accessibility. All presentations from the seminar are available here. A few ideas really crystallized for us in this session. Shannon Kelly’s presentation – co-presented with Lou Fioritto, who has been blind since birth – delivered a real-life experience of using AT, or Assistive Technology. Lou was able to give us a side-by-side narrative of using both incorrectly and correctly tagged PDF files. It was a real ear-opener! People with impairments use a screen reader such as JAWS to deftly navigate websites and PDF files, and Lou showed how utterly frustrating it can be to attempt to work through a PDF bank statement with no headers, vaguely tagged graphics, and tables with column headers but no row headers. Lou even commented that as a user, if you had to endure this, you’d just give up and get your information from another source! This raises the question of the perception of PDF files within the visually/cognitively impaired community. Correctly tagged PDF files can now be created at their source and remediated with automation at high speed (see Figure 1 below). However, do decades of bad PDFs with errant or no tags spoil it for today’s better PDFs? 
We suspect this will be debated in the coming months, but we are struck that HVTO (High Volume Transaction Output) content and the visually impaired community are just now beginning to intersect, some would say collide, at high speed. What’s compelling about PDF in HVTO is that it is a correct snapshot of a transaction-oriented document, with all the elements that accompanied that snapshot when it was created (i.e. logos, offers, signatures). PDF has become a de facto checkbox for those needing to comply with regulations that mandate multi-year retention of exact-replica artifacts that can be produced in court. Ever heard of an insurance company coming to court with HTML representations of an insured’s date-stamped renewal notice? Nope, it’s all about PDF when the lawyers are involved. So, the thinking goes: “PDFs are not going away, so why not work with PDFs rather than other, less-structured, less-accepted formats?” Another idea that blew us away was the immediacy of impending legal mandates, and for this we have to thank SSB BART Group and some attendees from a top Canadian financial institution. First, in the US, both Title III of the ADA and Section 508 are expected to be updated: Title III may explicitly call out websites as places of accommodation, and Section 508 will likely point to and embrace WCAG 2.0. Both of these changes are expected by spring 2014. In Ontario, the AODA states that if you host content on your website, the content must be accessible by January 2014, four short weeks from now. We’d like to thank our presenters and attendees. As usual, I think we learned as much at our seminar from the attendees as the other way around. It was an impressive show of community smarts, and we were glad just to be associated with it. It was so successful that Actuate has decided to take the show on the road. The next installment will be held in Charlotte, NC on January 23, 2014 – check out the agenda and let us know if you are interested in taking part!

Read More

Introducing xECM for Oracle E-Business Suite

What an exciting week at Enterprise World as we announce so many new innovations! One of the great new ECM-focused product offerings being announced is OpenText Extended ECM for Oracle® E-Business Suite. This new product brings together OpenText’s industry-leading ECM with Oracle’s best-selling ERP system, E-Business Suite, to drive productivity and governance within critical ERP-driven business processes. What does it do? Within Oracle E-Business Suite, users work on business activities – processes driven by their ERP system. Some of the information they need lives outside the ERP environment – documents, spreadsheets, and mail are all information the ERP users need to complete their work. Extended ECM for Oracle® E-Business Suite (xECM for Oracle) creates a connection between the ERP system and the ECM system within the organization, so that relevant content in the OpenText Content Suite can be viewed and worked on within the familiar E-Business Suite interfaces, in the context of the process. This makes work easier, faster and more accurate for E-Business Suite users. Some of the people who work with ERP business processes are only occasional Oracle users. These people spend most of their day in their Office applications, email or other common ECM environments. xECM for Oracle makes it easy for them to work on ERP-driven processes by bringing all of the objects, relationships, and content together within familiar Content Suite interfaces such as MS Office, Outlook, SharePoint, web, and mobile. Everything they need is at their fingertips, without needing to learn a new system. Content both inside and outside the ERP system needs to be governed consistently: ensuring compliance with regulations, protecting the content, keeping it as required for compliance, ensuring it can be discovered during an investigation, and deleting it when allowed or mandated to do so. 
Because xECM for Oracle is built on the foundation of OpenText Content Suite, this information governance and security is built in. Content such as HR documents, purchase agreements, and contracts is managed through its lifecycle with the same consistency and rigor as the associated email, documents, and CAD diagrams. Organizations can ensure their regulations and policies are adhered to regardless of the source of the information. How is it built? OpenText are experts at integration between ERP and ECM. Many organizations benefit from the depth and breadth of offerings we have that extend the ECM platform to provide integration and specific solutions with SAP systems. This experience has been used to create a similar foundational integration between Oracle E-Business Suite and OpenText Content Suite. The integration is built as core extensions to the Content Suite Platform, tied directly into the foundation of Content Server, so that administration and configuration of the ECM system automatically apply to the E-Business Suite components. Within the Oracle system, standard E-Business Suite APIs are used so that ECM components are natural extensions to the infrastructure and common UIs. The beautiful thing about the way this application is designed is that it works for all Oracle E-Business Suite processes, within the Oracle HTML and forms-based interfaces, and within standard ECM interfaces such as MS Office, SharePoint, web, and mobile. Let’s take the example of a project so we can walk through how the integration works. When a project is created in E-Business Suite, xECM for Oracle creates a workspace for the business objects within the project. The workspace shows the folders, structure, and relationships between objects. It reflects the users that have access, the metadata associated with the project, and the content. This business workspace is made available to users within both the ERP and ECM systems. 
Transactional content is added within the Oracle system; unstructured content is added within the ECM system. All content is visible to users in their own environment, so they can work where they are comfortable. When users are given access to objects within the process, they are automatically given access to the workspace. Metadata and classifications associated with the project are automatically applied to all the related content. Information governance is provided on all of the content from both systems, ensuring compliance and risk reduction for the organization without adding extra work for users. Costs are kept down by archiving content into the OpenText archive, where content is compressed, de-duplicated and stored on appropriate media on premise, in the cloud, or both. What are the key benefits?
· Productivity gains, as users get consolidated access to ALL information surrounding the transaction
· Increased user adoption, as users work where they are comfortable
· Increased accuracy, because there is no lookup and manual copying of information from one system to another
· Cost and risk reduction through information governance and archival
· Compliance with regulation and corporate policy
What is next? Extended ECM for Oracle® E-Business Suite officially releases in March 2014. Expect to see future releases of this product with added functionality, solution blueprints, and perhaps packaged applications for critical processes such as project management. In the future there may be similar integrations with other Oracle ERP systems. Learn more! There is more to see and learn about this exciting new offering from OpenText. Visit www.opentext.com/oracle for further information. To learn more about the release of Content Suite, visit www.opentext.com/content. Follow and join in our Twitter traffic this week with hashtags #OTEW2013 and #OracleECM.
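The project walk-through above follows a common integration pattern: an event in the ERP system triggers creation of a mirrored workspace in the ECM system, carrying over metadata and access rights. Here is a minimal, purely illustrative sketch of that pattern; the class, field names, and folder structure are invented for this example and are not the xECM API:

```python
class EcmWorkspaces:
    """Toy stand-in for an ECM system that mirrors ERP business objects."""

    def __init__(self):
        self.workspaces = {}

    def on_erp_object_created(self, obj):
        """Create a workspace when an ERP business object (e.g. a project) appears.

        Metadata becomes classifications applied to all workspace content,
        and ERP access rights map directly to workspace membership.
        """
        workspace = {
            "name": obj["name"],
            "metadata": dict(obj["metadata"]),
            "members": set(obj["members"]),
            "folders": ["Contracts", "Drawings", "Correspondence"],  # assumed template
        }
        self.workspaces[obj["id"]] = workspace
        return workspace

ecm = EcmWorkspaces()
ws = ecm.on_erp_object_created({
    "id": "PRJ-1001",
    "name": "Refinery Revamp",
    "metadata": {"region": "EMEA", "status": "Active"},
    "members": ["alice", "bob"],
})
```

The design point the sketch captures is that users never file content manually: because the workspace inherits the project’s metadata and membership at creation time, governance applies automatically to everything added later.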


Introducing the Discovery Suite

The Rise of Big Content and Big Value

Whenever I talk to customers about Information Governance, we talk about the technology and processes that help to balance the risk, cost, and value in the huge volume of information we create and accumulate. Inevitably, however, we end up talking more about risks and cost. We talk about costs because they are tangible and measurable. We talk about risks because of past experience, or events that happen to like organizations. Value, on the other hand, is typically acknowledged as a goal, but then it often fades into the background amongst the many challenges facing organizations as they take on Information Governance. Why is the value of our information always secondary to the risks and costs? I think there are three prime reasons:

1. It is very difficult to quantify value – and especially to put a return on investment on value
2. Due to the size and growth of unstructured content, the risks and costs are overshadowing the value conversation
3. Nobody really knows how to go about getting value out of all the unstructured content we create and store, due to the complexity of sources, formats, and sheer volume

A New Way to Look at Unstructured Content

If there is one thing the Snowden affair has done, it has been to raise the public consciousness of the power of Big Data. Big Data is primarily about capturing the electronic wake of our digital lives found in logs, click streams, and transactions, and analyzing these exceptionally large data sets to obtain valuable information. Big Content is not the wake of our activities but the byproduct of the knowledge economy. It is the huge volume of freeform, unstructured information that organizations are creating and storing. It is our email, documents, spreadsheets, and media being created on the desktop, on mobile devices, and increasingly, in the cloud. Like the “3V’s” of Big Data, Big Content has significant challenges that need to be addressed in order to be useful.
Big Content is Unintegrated, Unstructured, and Unmanaged. In order to exploit this information and gain insight from it, we need to solve the “3U’s” of Big Content.

Announcing the OpenText Discovery Suite – Solutions for Big Content

This week at Enterprise World, our annual user conference, we announced the coming availability of the Discovery Suite as part of our upcoming release. The Discovery Suite has been specially created to address the problems of Big Content.

Integrating the Unintegrated: Instead of getting smaller, the list of systems that host unstructured information is growing in most organizations. This, combined with a perception at the end-user level that there is no cost to hoarding information, means that unstructured content is found in every nook and cranny – including file servers, archives, email systems, ECM, ERP, and increasingly in social media applications and the cloud. The OpenText Discovery Suite eliminates these silos with a Unified Information Access Platform for enterprise sources and content. The Discovery Platform has connectors for critical enterprise applications, and it processes and indexes documents found across the enterprise. It also provides a rich library of UI components and APIs so that developers can embed search directly into line-of-business applications.

Bringing Structure to the Unstructured: Normalizing and enriching content brings structure and consistency to otherwise freeform content. In order to have the confidence to act on information, whether that means deleting, migrating, or reusing content, it is necessary to understand and locate it. Metadata added to unstructured information can be used for search, for reporting, for faceting, and to visualize the information. The OpenText Discovery Suite uses OpenText Content Analytics to bring structure to the unstructured. It extracts semantic information from content and applies it as structured metadata.
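As a toy illustration of this enrichment step, the sketch below turns freeform text into structured metadata. A real content analytics engine uses trained NLP models; this example uses small lookup lists purely to show the input/output shape, and all the names and word lists in it are invented for the example.

```python
# Minimal illustration of enriching unstructured text with structured
# (semantic) metadata: entities plus a crude sentiment score. Lookup
# lists stand in for a real NLP pipeline.
import re

KNOWN_PEOPLE = {"Alice Smith"}
KNOWN_ORGS = {"OpenText", "Acme Corp"}
POSITIVE = {"approve", "excellent", "agree"}
NEGATIVE = {"dispute", "terminate", "breach"}

def enrich(text: str) -> dict:
    words = set(re.findall(r"[a-z]+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return {
        "people": sorted(p for p in KNOWN_PEOPLE if p in text),
        "organizations": sorted(o for o in KNOWN_ORGS if o in text),
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
    }

doc = "Alice Smith of Acme Corp wrote to dispute the contract terms."
print(enrich(doc))
```

Once content carries metadata of this shape, it can drive search facets, reporting, and retention decisions, which is exactly the point of bringing structure to the unstructured.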
Semantic metadata includes people, places, organizations, complex concepts, and even sentiment. This, combined with file metadata and permissions metadata, provides a way to query and organize unstructured information in ways not possible before.

Managing the Unmanaged: Big Content, more than anything else, is unmanaged. We need to first manage the content so that we can differentiate between the valueless content and the content that is worth keeping. Information Governance and defensible disposition can be used to reduce the risk and cost of the content found to have no value. By enriching content and making it more findable, valuable content becomes more accessible, more valuable, and leads to improvements in productivity and engagement. The OpenText Discovery Suite has applications to manage specific Big Content problems that organizations struggle to solve every day. Search and content analytics alone do not solve business problems. Business logic, roles, reporting, and much more need to be built on top of the platform in order to support the use case and provide a clear return on investment. Some of the Information Governance use cases include auto-classification of content, collection and early case assessment for eDiscovery, and remediation and migration to control the lifecycle of content. Some of the engagement and productivity use cases include site search, intranet, and extranet search applications.

A Roadmap for Innovation

The release of the Discovery Suite in April of 2014 is a significant milestone in the roadmap for search and content analytics at OpenText. The R&D group, which owes its roots to the acquisition of Nstein, started with an industry-leading Content Analytics solution and built upon that to deliver the Discovery Platform, InfoFusion, the industry’s only auto-classification solution with built-in defensibility, as well as the innovative site search application Semantic Navigation.
The innovation continues with the release of the Discovery Suite. At Enterprise World we are providing sneak peeks of two new applications that will be included in the Discovery Suite. Both of these applications are great examples of the power of integration in EIM. The first is the integration of the Discovery Suite and the Experience Suite: InfoFusion will provide semantic search for the Experience Suite, significantly differentiating the search experience. The second is an integration with the Content Suite, where we extend lifecycle control of content outside of the Content Suite. Most critically, it will allow organizations to light up the dark content that lives in file shares and legacy applications. Based on an understanding of the content, organizations can delete content, leave it in place, or migrate it to the Content Suite.

The Way Forward – Making Big Content Valuable

In solving the “3U’s”, the Discovery Suite encapsulates the processes required to identify value in Big Content. To derive value from Big Content, we must first determine what is not valuable. We need to find and deal with this content using defensible disposition policies. Discovery applications like Auto-Classification can proactively or reactively assess content against the organization’s policies to identify business-relevant content, records, and transitory content. Once the valueless content is being managed, it is possible to focus on securing and managing the valuable content. Discovery applications can move content from unsecure locations like file servers to OpenText ECM for ongoing retention and archiving. Finally, the value of the content can be amplified: enriched content provides better access, greater productivity, increased collaboration, and content reuse. In developing a repeatable process for garnering and increasing the value of content, we actually address the other two issues.
The first step in the process is separating out the valueless content, which is where a huge amount of the risk and cost resides in Big Content. Doing this is going to make the IT and Legal teams happy. Lastly, by making content more accessible and more findable, it is going to be easier to see where Big Value is being realized.
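The delete / leave-in-place / migrate decision at the heart of this process can be sketched as a simple policy rule. This is only an illustration: the record flag, the seven-year threshold, and the field names are assumptions invented for the example, not an actual Discovery Suite policy.

```python
# Hedged sketch of a lifecycle-disposition rule for dark content on a
# file share: migrate business records into the ECM, defensibly delete
# stale transitory content, and leave active content in place.
from datetime import date, timedelta

def disposition(doc: dict, today: date) -> str:
    """Classify one document against a simple, invented lifecycle policy."""
    if doc.get("is_record"):
        return "migrate"          # business records move under ECM control
    age = today - doc["last_modified"]
    if age > timedelta(days=7 * 365):
        return "delete"           # stale transitory content: defensible disposal
    return "leave"                # active working content stays in place

today = date(2014, 4, 1)
docs = [
    {"path": "contracts/msa.pdf", "is_record": True,
     "last_modified": date(2013, 6, 1)},
    {"path": "tmp/old_notes.txt", "is_record": False,
     "last_modified": date(2005, 1, 1)},
    {"path": "drafts/plan.docx", "is_record": False,
     "last_modified": date(2014, 2, 1)},
]
for d in docs:
    print(d["path"], "->", disposition(d, today))
```

The value of encoding the policy this way is that the same rule runs identically over millions of documents, which is what makes disposition defensible rather than ad hoc.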


Prepared for an Unannounced Regulatory Inspection?


Don’t Get Caught Off-Guard!

At any moment a regulatory inspector may walk up to your door and ask for an inspection RIGHT NOW. Regulatory compliance is of paramount importance to pharmaceutical and biopharmaceutical companies, who must be continually prepared for inspections by the FDA, EMA, and other health authorities to meet requirements for good clinical practices (GCP) and good manufacturing practices (GMP). Successful GCP/GMP inspections can be achieved only through well-controlled laboratory, manufacturing, and quality systems with efficient access to related documentation and information. Successful inspections require an organization to infuse controlled processes into its enterprise content management (ECM) systems, business processes, and culture.

Why is this such a hot topic?

Any time an agency makes an inspection and issues findings (e.g., FDA Form 483 in the U.S.), sponsors are required to change their processes and systems to become compliant within a prescribed schedule. If the sponsor fails to meet the objectives of the 483, then strong measures with legal implications are applied. At that point, the organization will have full executive attention and there will be no hotter topic!

Burning Business Implications of an Unannounced Inspection

Unannounced inspections mean the business unit cannot make any last-minute adjustments or perform unique paper-based manual tasks such as copying documents and setting up a review room. Unannounced inspections mean the business unit must be prepared at all times, and electronic source record systems are the only way to do this. In addition to unannounced inspections, inspectors are now also asking for remote inspections, which means that electronic source systems must be clean and available to users who will be off site; the business unit cannot control or influence the inspector’s approach to reviewing the files. The systems must be prepared for inspector access, and the business unit cannot even enter last-minute missing documents.
How prepared are you for unannounced inspections?


What is Your Biggest ECM Challenge?

About one year ago, in a 4-week Twitter survey, we asked our audience the question “What is the biggest challenge in ECM today?” We received hundreds of responses from professionals in various roles – from IT architects to line-of-business managers and information industry consultants. The survey revealed that modern Enterprise Content Management (ECM) systems and processes are so complex that people are absolutely willing to take time out of their day to share and discuss numerous ECM challenges on social media. The responses ranged in sentiment from pure frustration (“nobody cares”) to active engagement (“building the right team that knows what they are doing!”), and related to both business challenges (“lack of budget”, “silo policies”) and technical ones (“content classification”, “security”). Just look at the image above – all of these were brought up by our participants last year! When we summarized the results of the survey, it became clear once again that ECM is an area organizations as a whole care deeply about, but when it comes to specific issues, they often find it hard to take proper action to improve their ECM systems and processes. The “winning challenge” summarized the prevalent opinion of the participants, and at the same time pointed to the complexity of ECM: “The Biggest Challenge in ECM today is User Comprehension and Adoption”. This year, we are reaching out to information industry professionals again with the same straightforward question: What is your biggest ECM challenge? We can’t wait to find out whether any challenges have been addressed since last year. After all, one of the responses to our survey was “too many vendors, all leaders”, and if that is true, these vendors should be working hard to help organizations solve their ECM challenges. Join our conversation – leave your responses below.
