CTMS on a Tablet? Windows 8 Will Make it Happen

 

After spending a few days at the recent Microsoft-sponsored Life Sciences Innovation Forum, I can say without exaggeration that the upcoming launch of Windows 8 will bring amazing advancements in the way clinical research is conducted.

Many of these changes revolve around Windows 8’s ability to run on tablet computers.  Just think of the possibilities: the power of Microsoft Office applications such as Excel, Word, Access, and Outlook in the convenience of a tablet.  You can be completely mobile without giving up any functionality at all!

How this will impact clinical trials became clear in a panel discussion featuring representatives of Vistakon (Johnson and Johnson’s vision care division), the Harvard Clinical Research Institute (HCRI), and the medical device company C.R. Bard.  These industry leaders discussed the success they achieved utilizing OnPoint, the global clinical trial management system (CTMS) produced by Microsoft partner BioClinica.  OnPoint utilizes Microsoft SharePoint and Office applications to efficiently access, share, and analyze operational trial data.

Each representative discussed the specific challenges OnPoint helps them overcome.  Vistakon requires a tool that is flexible enough to be used across many divisions while providing cross-study reporting.  HCRI needs a system that provides metrics on not only clinical data, but all aspects of trial data.  During the vendor selection process C.R. Bard realized that a “one size fits all” CTMS wouldn’t work for them and utilized OnPoint for its customization features.

Peter Benton, BioClinica’s President of eClinical Solutions and a panel participant, summed it up nicely when he said, “OnPoint is the most powerful, easy to use CTMS in the industry.  From the largest to the smallest pharmaceutical, biotech, and medical device companies, from CROs to AROs, OnPoint is the key to conducting clinical trials efficiently and on-budget.”

In the very near future, a Windows tablet will literally put the power of SharePoint and OnPoint CTMS in the palm of your hand.  Will this usher in a whole new level of trial efficiency and cost reduction?  Watch this space!

Update June 4, 2012: See here for a related and relevant article: http://blogs.msdn.com/b/healthblog/archive/2012/06/04/data-entry-far-easier-for-clinicians-using-windows-tablets.aspx

Compiled List of SharePoint Resources

Many customers in the process of rolling out SharePoint across the organization have recurring questions about the best sources for training and tools.  Below is a compiled list of some of the available resources:

Compliance, Governance and Administration:

Utilities and Add-ons:

Legacy Migration, Integration and Content Governance:

So what is the big deal about Regulated Document Management?

There has been so much discussion about this topic lately that I feel compelled to write about it.  We have so many customers asking: can SharePoint be validated?  The answer is a resounding YES!  My colleague Les Jordan has written about this extensively in his Blog.  We even have a Guidance on Configuring SharePoint for Part 11 Compliance, and our partner NextDocs has recently made available a very useful recorded Webcast: http://www.nextdocs.com/en-us/Pages/Validation-Strategies-for-SharePoint-Solutions.aspx.

There is so much FUD spread by our competitors about SharePoint, which we need to address again and again.  Obviously, it is in their best interest to maintain the status quo, and for their customers to keep paying for their gold-plated, expensive and complex legacy systems.  There is a whole generation of Informatics people whose careers were built on these systems, and reason and common sense have often given way to ‘religious debates’ about repositories.  I have spoken at the DIA EDM Conference about this topic for the last two years, and to some people I may sound like a heretic when I say ‘It is not about the repository!’.  And then I usually add ‘It is more about the Metadata’ (see my other Blog postings about this).

The fact of the matter is that legacy document management systems were initially designed to overcome the limitations of file systems, i.e. the lack of version control, metadata, object-level permissions, audit trails etc.  Then, when the FDA published 21 CFR Part 11 in March 1997, companies paid through the nose to upgrade their document management systems to be compliant.  This was a really big deal to them, and cost millions.  Of course it was a big deal, because these systems store all the critical content that a pharmaceutical company has to send to the FDA to get its drugs approved.  This content is their lifeline, and companies were willing to pay any amount to be compliant, and not to delay the approval of a drug by even a single day (a single day of delay could mean millions in lost revenue).

But despite all this, companies are still using these ‘glorified file systems’.  I do not mean to trivialize the importance of document management, because file systems are clearly not suitable for compliant applications.  However, there is no reason why these legacy document management systems should be so complex and expensive!  When I came to work for Microsoft, I was really excited by the power of SharePoint as a platform.  I saw huge potential to build the next generation of document management systems on the SharePoint platform.  So one of the first things I set out to do was to write a White Paper called Enterprise Content Management in Regulated Industries, so we could establish our vision.  Here is also another excellent White Paper on Compliance: Compliance Features in the 2007 Microsoft Office System.  The next step was to realize our vision, and to recruit partners to develop SharePoint-based solutions for 21 CFR Part 11 applications.  When we first demoed our solutions back in 2007, even the analysts started taking notice.  Today, we have several large pharmaceutical companies who have already validated SharePoint on their own, or via some of our systems integrator (SI) partners.  See the recently published Case Study about Roche Diagnostics, where they replaced Documentum with SharePoint for all validated IT documentation.  We recently released another Case Study on Affymetrix, who also replaced Documentum with SharePoint.

I also need to address some misconceptions around validation.  First of all, there is no such thing as ‘FDA validated’ software.  Any software vendor who claims otherwise is showing their ignorance.  Validation is the responsibility of the customer.  And it is not the software alone that needs to be validated, but the whole environment, which includes hardware, software and even internal processes.  And then there is the question of why one would want to validate SharePoint itself.  To be sure, SharePoint needs to be a ‘validatable’ platform, which it is.  But instead of validating all of SharePoint, only the application that runs on SharePoint for a particular GxP use case needs to be validated.  This means that the application needs to be built first, and then validated.

However, I am strongly opposed to building one-off applications, for several reasons.  I have found that in over 95% of cases, pharmaceutical customers need the same kind of capabilities.  Therefore, why not use off-the-shelf applications, and configure them?  I always recommend considering this approach first!  This way, the costs of building the application and developing the validation test scripts and protocols are amortized over many customers, and the overall costs are far less.  Unfortunately, many companies who are used to the old ways of doing things still don’t seem to understand this.  There are several partners who have built off-the-shelf or ready-to-deploy SharePoint-based solutions for 21 CFR Part 11 compliance: NextDocs, OrniPoint, Qumas, FirstPoint by CSC Life Sciences, Montrium, Court Square, GxPi, and several additional solutions are coming along nicely.

Among the off-the-shelf solutions, NextDocs has been gaining a tremendous amount of market momentum, and they have done a superb job with their applications.  I love to see people’s faces when they get a demo of the solution: they compare it with the old legacy systems they are struggling with, and get really excited.  It really brings out the best of SharePoint.  I also love their slogan ‘Compliance without the complexity’ – it is spot on!  They have just posted a series of recorded Webcasts on their Site – they are great!

I know that we also have to be realistic, because these highly customized legacy systems are so deeply embedded within corporations that they cannot just be ripped out and replaced overnight.  That is in nobody’s interest, and way too disruptive.  However, there are many GxP applications where people have a real need for such solutions, and are still doing everything on paper because they just cannot afford these gold-plated ancient solutions.  Validation alone for these legacy solutions can take 6-9 months and run to 7 figures, whereas we have several cases where one of the above off-the-shelf solutions was installed and validated within a matter of weeks!

As CIOs come under increasing pressure to cut costs, there are no more sacred cows, and they will be looking at every single legacy system they can replace, saving millions in the process.  I know of several major pharmaceutical companies that have spent in excess of $50 million just upgrading and consolidating their legacy ECM systems.  A brand new implementation of a SharePoint-based ECM system (including migration or integration) would be a small fraction of this.  And now they have locked themselves in for the next 5-10 years, and are at the mercy of armies of consultants to keep these monolithic behemoths running, and integrated with their other enterprise systems.  Someone with decision-making powers needs to stand up and say ‘stop this insanity’!

And now I need to address the issue of scalability.  The legacy vendors are spreading FUD that SharePoint is not enterprise-ready and scalable.  As an example, see here for some information about Pfizer’s implementation of SharePoint.  GlaxoSmithKline has announced that they are rolling out SharePoint Online to over 100,000 users.  BMS runs on SharePoint, and so does the Univadis Site that Merck has launched.  The U.S. Air Force operates what is probably the largest Extranet in the world, with over 750,000 users – built on SharePoint.

I have also posted some documents here about SharePoint scalability.  The results are amazing: scalability up to 2 million users!  And here are the latest SharePoint Server 2010 performance and capacity test results and recommendations.  I doubt that any legacy ECM system can produce similar results when all the parameters are compared.  The main point is that, like any other system, it has to be architected right and deployed in the right manner!

You are wasting time. Find out why – The cost of ineffective search

I have briefly written about this topic in earlier Blog postings, but I wanted to elaborate a bit more on the topic.  Here is an article of fundamental importance that I have kept as reference material over the years: http://www.networkworld.com/news/2007/012307-wasted-searches.html

I think Susan Feldman at IDC is one of the leading thinkers in this area, and I completely agree with her views.  Simply put: content without context is incomplete, just as search without metadata is incomplete.  Here is another article from back in 1999: http://www.21cfrpart11.com/files/library/miscellaneous/metadata_cio_council0299.pdf

The key highlights are as follows:

  • Metadata is one of the biggest critical success factors to sharing information.
  • Metadata can make your information sharing and storage efforts great successes, or great failures. Metadata can get you in trouble with the law, or keep you out of such trouble.
  • The alternative to metadata management is information chaos.

Even the Government is starting to understand the importance of metadata for information sharing: http://civsourceonline.com/2010/05/06/new-report-suggests-using-a-metadata-process-to-improve-gov-info-sharing-accuracy/

I have not worked with a single company that has addressed the problems above, which means that there is still information chaos within every single company.  I know this is a strong statement, but I am willing to stand by it!

There are some great search tools out there.  But search tools can only find what is ‘indexable’ (and a bit more, by combining them with text mining and semantic approaches).  This is still not enough.  I strongly believe that what is needed is to track all metadata at the object level across the enterprise, and to combine this with search results in a faceted result set.  Why is this important?  Because Enterprise Search is not Web Search!  We are not looking for Web pages, ranked by algorithms based on how many hyperlinks point to a site.  We are looking for documents, and we often need to find every single one of them, for compliance or other reasons.  The only way to do this is a faceted result set, which allows us to drill down precisely into the results.  And metadata is the ‘sorting mechanism’ that allows us to do that.  Now: what kind of metadata do we need exactly?  We need the following: taxonomy-driven metadata, folksonomy-driven metadata, user-defined metadata (on an individual level), and semantic (or meaning-based) metadata.
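To make the faceted drill-down idea concrete, here is a minimal sketch in Python.  The document records and field names are invented for illustration; a real implementation would sit on top of the search index rather than a list in memory.

```python
from collections import defaultdict

# Hypothetical document metadata; titles and field names are illustrative only.
documents = [
    {"title": "SOP-001", "doc_type": "SOP",      "department": "QA",       "status": "Approved"},
    {"title": "SOP-002", "doc_type": "SOP",      "department": "Clinical", "status": "Draft"},
    {"title": "RPT-001", "doc_type": "Report",   "department": "QA",       "status": "Approved"},
    {"title": "PRO-001", "doc_type": "Protocol", "department": "Clinical", "status": "Approved"},
]

def facet_counts(docs, fields):
    """Count how many documents fall under each value of each metadata field."""
    facets = {f: defaultdict(int) for f in fields}
    for doc in docs:
        for f in fields:
            facets[f][doc[f]] += 1
    return facets

def drill_down(docs, **filters):
    """Narrow the result set by exact metadata matches (one facet per keyword)."""
    return [d for d in docs if all(d[k] == v for k, v in filters.items())]

facets = facet_counts(documents, ["doc_type", "department", "status"])
print(dict(facets["doc_type"]))           # {'SOP': 2, 'Report': 1, 'Protocol': 1}
approved_qa = drill_down(documents, department="QA", status="Approved")
print([d["title"] for d in approved_qa])  # ['SOP-001', 'RPT-001']
```

Each facet both summarizes the result set and acts as a filter, which is exactly the drill-down behavior described above.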

The above may sound scary and complicated, no doubt.  But the good news is that a whole generation of new technologies is coming along to solve this problem.  First of all, the Office 2007 System in and of itself is a revolutionary product.  For the first time, what we have is an ‘encapsulated nugget of information’ – the metadata ‘travels with the document’, since there is a separate ‘document part’ for metadata within the document itself.  This combines content and context, and solves a problem all legacy ECM systems have: when a document is checked out, it no longer knows anything about itself.  The document has been removed from the system, but its metadata still resides within a database table in the ECM system.  This leads to a huge compliance-related risk that companies are not equipped to handle.  I will admit that only a small fraction of corporate content resides in Office 2007 today.  However, we have a great set of tools to manage metadata on the back end, on an enterprise level.  As I have written about earlier, the NextPage Information Tracking Platform is able to track any content across the enterprise via its unique ‘digital threading technology’.

When all this comes together in an integrated fashion, we can finally start addressing the information chaos that has been reigning across the enterprise.  And, as I also stated earlier, it is not about technology (which is the enabler, of course), but more about people, processes and change management.  All this has to be seamless, easy to use, and the complexity has to be hidden from the end user.  But I think we are finally getting to that point.  And once we do, the whole process of e-Discovery will become a far less onerous problem than it is today!  I know of several large companies who are spending between $10 million and $70 million just to address their e-Discovery requirements.  That is almost too hard to believe, but true.  Of course, when we think about the amount of money involved in class action lawsuits, we can understand their motivation.  It still boggles the mind, though.
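The ‘metadata travels with the document’ point is easy to demonstrate: an OpenXML file is a ZIP package, and its core properties live in a part inside the package itself.  Here is a small Python sketch using only the standard library; the toy package and the property values are invented for illustration (a real .docx contains many more parts).

```python
import io
import zipfile
from xml.etree import ElementTree as ET

# An OpenXML document (.docx) is just a ZIP package; its core metadata lives
# in the part docProps/core.xml, so it travels with the file itself.
# For illustration we build a toy package containing only that part.
CORE_XML = """<?xml version="1.0" encoding="UTF-8"?>
<cp:coreProperties
    xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Stability Study Report</dc:title>
  <dc:creator>J. Scientist</dc:creator>
  <cp:category>GxP</cp:category>
</cp:coreProperties>"""

package = io.BytesIO()
with zipfile.ZipFile(package, "w") as pkg:
    pkg.writestr("docProps/core.xml", CORE_XML)

# Any consumer can now recover the metadata from the document alone,
# without asking the repository it was checked out of.
ns = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}
with zipfile.ZipFile(package) as pkg:
    core = ET.fromstring(pkg.read("docProps/core.xml"))
print(core.find("dc:title", ns).text)     # Stability Study Report
print(core.find("cp:category", ns).text)  # GxP
```

Because the metadata is a part of the package, checking the document out of a repository does not separate content from context.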

Update on the latest Innovations from Microsoft Research

I have blogged in the past (actually almost exactly a year ago – oh my, how time flies when you are having fun) about Scientist Innovation, and the exciting things we are doing in the Research area.  Well, a year is a long time in the software industry, so it is time for an update.  Among all the great things that are going on, I want to focus on the ones that are the most exciting from my perspective.

First of all, there is Pivot.  I have more thoughts than I can fit in a Blog about the potential applications of Pivot for visualizing data and images.  Be sure to check out this video: Gary Flake discusses Pivot @ TED2010.  I am already talking to several customers about how to build Pivot Collections of DICOM images.  This could be huge: medical diagnostic systems are generating vast amounts of image data used in Clinical Trials, and companies are having a really hard time managing all of it.  DICOM images are typically stored in PACS systems.  These are relatively old systems built for a specific purpose; viewing and manipulating large image sets was not the intent when they were designed.  Another exciting development is the recent announcement of a Silverlight Control for Pivot.  This opens up the potential for almost everyone to access this great technology with a Browser.  The Pivot team is working double overtime to keep up with the exploding demand, and more exciting announcements are coming in the summer.  Here are some really great examples of Pivot in action: http://momcollection.cloudapp.net/ and http://netflixpivot.cloudapp.net/

I could not help putting on my thinking cap about how all this could work, and one of the ideas I am investigating is how to do even more with images and Pivot.  What if we could store the images in a repository that is ‘semantically aware’, so we could go beyond the limitations of file systems?  As it happens, the folks at Microsoft Research have already built such a repository, called Zentity.  They call it a Research-Output Repository, but for me it is simpler to refer to it as a semantically aware repository.  I am starting to think about how to automatically pull images and their relevant metadata out of PACS systems, create Pivot Collections, and use Zentity as the repository.  More to come!

Microsoft External Research has a whole range of great projects that are highly relevant to the BioPharma community.  See here for a great deck by Alex Wade.  The latest announcement from the Scholarly Communications Team is the Chem4Word add-in.  See here for a 3-minute introductory demo.  Finally, be sure to check out the Clinical Documentation Solution Accelerator (CDSA) – a truly amazing free application that really shows off the power of the Office platform: http://www.mscui.net/CDSA.htm  The code can be downloaded from here: http://code.msdn.microsoft.com/cdsa
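To make the Pivot Collection idea a bit more concrete: a collection is described by a CXML file listing items and their facets, which the viewer uses for faceted browsing.  Below is a rough Python sketch that generates such a file from DICOM-style metadata.  The records and facet names are invented, the schema details are simplified, and the Deep Zoom image references a real collection requires are omitted.

```python
from xml.etree import ElementTree as ET

# Invented DICOM-style study records; a real pipeline would pull these
# out of a PACS system.
images = [
    {"name": "Scan 001", "modality": "CT",  "body_part": "Chest"},
    {"name": "Scan 002", "modality": "MRI", "body_part": "Head"},
]

NS = "http://schemas.microsoft.com/collection/metadata/2009"
collection = ET.Element(
    "Collection", {"xmlns": NS, "Name": "Trial Images", "SchemaVersion": "1.0"}
)

# Declare the facet categories the viewer can filter on.
cats = ET.SubElement(collection, "FacetCategories")
for facet in ("Modality", "BodyPart"):
    ET.SubElement(cats, "FacetCategory", {"Name": facet, "Type": "String"})

# One Item per image, carrying its facet values.
items = ET.SubElement(collection, "Items")
for i, img in enumerate(images):
    item = ET.SubElement(items, "Item", {"Id": str(i), "Name": img["name"]})
    facets = ET.SubElement(item, "Facets")
    for facet, key in (("Modality", "modality"), ("BodyPart", "body_part")):
        f = ET.SubElement(facets, "Facet", {"Name": facet})
        ET.SubElement(f, "String", {"Value": img[key]})

cxml = ET.tostring(collection, encoding="unicode")
print(cxml[:80])
```

The point is that the collection is just generated XML, so wiring a PACS extract (or a Zentity query) to a Pivot front end is mostly a metadata-mapping exercise.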

I have also been following closely the evolution of the Microsoft Semantic Engine.  See here for a PDC presentation to learn more.  The presentation deck can be found here.  Here is a great Blog posting about it.  I am really looking forward to seeing this technology make its way as a core element of the Microsoft stack, and to augment our Search and database technologies.  Among other things, the potential to integrate the Semantic Engine with solutions such as MetaPoint for metadata enrichment using semantic approaches is really exciting, and will solve many difficult problems in the Enterprise Information space.

Document Automation, Contract Management, LMS, Legal and Engineering Document Management Solutions for SharePoint

Qorus: http://qorusdocs.com/

Seismic: http://www.seismic.com/

Intelledox: http://www.intelledox.com.au/

VIRTUALIS by Alcero: http://www.alcero.com/EN/products/Pages/virtualis.aspx

ActiveDocs: http://www.activedocs.com/

MacroView: http://www.macroview.com.au/

docBlock Ascend: http://www.blackbladeinc.com/en-us/products/docBlock/Pages/default.aspx

Xpertdoc: http://www.xpertdoc.com/en/solutions  (document output automation)

XiDocs by Xinnovation: http://www.xinn.com

AdLib Software: http://www.adlibsoftware.com/PDFSharepoint.aspx  (PDF rendering and assembly)

Muhimbi PDF Converter: http://www.muhimbi.com/Products/PDF-Converter-for-SharePoint.aspx

Hyper.Net by Coextant: http://www.coextant.net/  (document conversion and transformation)

Proposal Management by Octant Software: http://www.octant.com/

DocXtools by Microsystems: http://www.microsystems.com/products/docxtools-legal.php

SmartDocs by ThirtySix Software: http://thirtysix.net/

Sample code to Assemble Multiple Office Documents:
http://code.msdn.microsoft.com/OOXMLv20CTP/Release/ProjectReleases.aspx?ReleaseId=2079
http://blogs.msdn.com/b/edhild/archive/2007/09/03/video-demo-of-extending-sharepoint-to-collab-on-document-fragments.aspx
http://blogs.msdn.com/b/brian_jones/archive/2010/01/04/document-assembly-merging-excel-powerpoint-and-word-content-together.aspx
http://cid-7354ef8f2fda474f.office.live.com/browse.aspx/.Public/Ed%20Hild%20-%20Word%20Automation%20Podcasts

Proposal Management approach for very large and complex documents and sets, based on the DITA standard for Topic Based Authoring: http://www.dita-exchange.com/

Contract Lifecycle Management by CLM Matrix http://www.clmmatrix.com/

Corridor Company: http://www.corridorcompany.com/

Dolphin Contract Manager: http://www.dolphin-software.com/contractmanagement.htm

DealBuilder Contract Express: http://www.business-integrity.com/products/contractexpress-for-sharepoint.html

ICERTIS: http://www.icertis.com/SitePages/index.aspx

Case Management: http://www.deltascheme.com/solutions/case-management/

e-Procurement: http://www.optimusbt.com/eprocure_product

Learning Management Systems:
http://www.sharepointlms.com/
http://www.point8020.com/Services.aspx
http://www.shareknowledge-lms.com/
http://www.elearningforce.com/products/Pages/SharePoint_LMS.aspx
http://www.itworx.com/Solutions/ConnectedLearningGateway/Pages/default.aspx
SharePoint Learning Kit on CodePlex: http://www.codeplex.com/SLK

Legal Industry Solutions:

XMLAW: http://www.xmlaw.net/default.aspx
iLink Systems: http://www.ilink-systems.com/Industries/Legal.aspx
Handshake Software: http://handshakesoftware.com/HandshakeSoftware/tabid/109/Default.aspx

e-Discovery support:
WorkProducts: http://www.workproducts.com/
Navigant Consulting: http://navigantconsulting.com/
Digital Reef: http://www.digitalreefinc.com/

CAD and Engineering Document Management:
http://www.cadac.com/organice/en/solutions/Pages/default.aspx
http://www.software-innovation.com/en/products/ProArc_/pages/ProArcforEngineeringandConstruction.aspx

Digital Asset Management (DAM) solutions:
http://www.equilibrium.com/eq-software/mediarich-for-sharepoint/overview/
ADAM for SharePoint: http://www.adamsoftware.net/

Rich Media solutions:

Kaltura Video & Rich Media Extension for SharePoint: http://corp.kaltura.com/Video-Solutions/Enterprise
VIDIZMO for SharePoint: http://www.vidizmo.com/ProductsAndServices/SharePointVideoPortal.aspx?link=ProductsAndServices&expandable=0
Mediavalet: http://www.mediavalet.co/
Polycom RealPresence Media Manager: http://www.polycom.com/products/uc_infrastructure/realpresence_platform/video_content_management_solutions/enterprise_video_capture/realpresence_media_manager.html

Update on the Intelligent Content 2010 Conference

I recently returned from the Intelligent Content 2010 Conference.  It was the most insightful and useful conference on Content Management that I have ever attended.  I had the privilege of being invited as the Day 1 Keynote speaker by Ann Rockley, whom I regard as one of the thought leaders in this area.  It was a real honor to be speaking alongside luminaries such as Bob Glushko.  I have posted my presentation here.  Many thanks again to Ann and Scott Abel, the Content Wrangler, who helped me review the deck to make sure it was high level and product focused.  Scott also posted an interview with me here.

I found it absolutely fascinating that many in the audience were Twittering away during the presentations, adding ‘Twitterables’ or ‘Notable Quotables’.  Social Media in action, and applied to business scenarios.

As far as the conference is concerned, every single presentation was great!  I was often torn over which of the two tracks’ sessions to attend.  The organizers did an absolutely fabulous job selecting the content and the speakers.  I learnt from every one of them.  Given my specific interests related to Content Management, the presentations by Joe Gollner, Noz Urbina and Paul Wlodarczyk resonated most.  I was also absolutely fascinated by the presentation and demo given by Natasja Paulssen and Arjan van Rooijen of Quatron.  I have many ideas on how their work can be applied to some of the work I am doing with customers, and how some of the latest technology coming out of Microsoft Research could be tied into their solutions.  Very exciting stuff!  And finally, how could a conference on Intelligent Content be complete without a thundering and fascinating closing session by Mr. Scott Abel, the Content Wrangler!  It is hard to describe what Scott does without having witnessed him in action – but the man is really amazing, and a true techno-visionary.  And he is extremely entertaining!

Once again, thank you Ann, Scott and all the members of the Rockley Group for a superb job with the event, and I hope to be able to return next year to learn more, and to give the audience an update on where we are with the Intelligent Content Framework.  

Update 4/29/10: Someone just sent me a link to a nice Blog post from Glenn Emerson here: http://www.gemersonconsulting.com/?p=132

SharePoint 2010 – the new Information Operating System for the Enterprise

I spent last week at the SharePoint Conference 2009 in Las Vegas.  It was absolutely amazing!  More than twice the size of the last conference, and completely sold out.  Over 7,400 attendees from all over the world, which in itself is amazing, considering today’s economic climate.

I talked to a lot of customers and partners, and they all spoke in superlatives.  Here are some quotes:

“Amazing”

“Game-changing”

“Transformational”

“The new Information Operating System for the Enterprise” 

I agree with all of these statements!

For a good sense of what is coming, check out the SharePoint Team Blog:

http://blogs.msdn.com/sharepoint/archive/2009/10/19/sharepoint-2010.aspx and Joel Oleson’s Blog: http://networkedblogs.com/p15682036

Here is a good overview: http://officebeta.microsoft.com/en-us/sharepointserverhelp/whats-new-in-microsoft-sharepoint-server-2010-HA010370058.aspx

Additional technical information can be found here: http://technet.microsoft.com/en-us/sharepoint/ee518662.aspx

Over the next few weeks, more content will be posted here: http://sharepoint2010.microsoft.com/Pages/default.aspx

When it comes to ECM, I am particularly excited about two things: Word Automation Services: http://blogs.msdn.com/microsoft_office_word/archive/2009/10/26/introducing-word-automation-services.aspx  and the rest of the major ECM-related upgrades: http://weblogs.asp.net/erobillard/archive/2009/10/20/scaling-sharepoint-2010-from-small-libraries-to-massive-repositories.aspx  Here is another good high-level summary, focused on ECM: http://aiim.typepad.com/aiim_blog/2010/01/8-reasons-sharepoint-2010-looks-like-a-true-ecm-system.html

We are entering a new era in Enterprise Information Management, and this is possibly the most exciting period in the history of SharePoint.  I am glad to be part of it!

The Evolution of Content Management and the Emergence of Intelligent Content

I am starting to gather my thoughts about the upcoming Intelligent Content 2010 Conference, where I have the honor of being Keynote Speaker, next to a luminary such as Bob Glushko.  I am in illustrious company, so I had better deliver a great session.  Nothing like putting some pressure on myself 🙂

I tend to think of Enterprise Content as evolutionary in nature.  I was thinking of an illustrative slide showing a good image of evolution.  Thankfully, there is Bing Search with some great images I will be able to use (I especially like the one which shows people evolving into ‘PC Apes’).  To me, the Evolution of Content into ‘Intelligent Content’ bears some similarity to the evolutionary process that led to Homo Sapiens, the ‘knowing man’ (and I do not intend to get into a discussion about Creationism vs. the Theory of Evolution here, even though DITA refers to Darwin…).

This brings me to another important point: what exactly is ‘Intelligent Content’?  It is a fairly new term that thought leaders in the ECM space such as Ann Rockley started using a few years ago.  I found a great interview with her titled What Constitutes “Intelligent Content”?, which describes Intelligent Content in this manner: Intelligent content is structurally rich and semantically aware, and is therefore automatically discoverable, reusable, reconfigurable, and adaptable.  I also found an excellent article by Joe Gollner about The Emergence of Intelligent Content, with the subtitle ‘The evolution of open content technologies and their significance’, where he focuses mostly on the evolution of Structured Content.

Indeed, what we need much more of in the ECM space is Intelligent Content, and what we need less of is ‘dumb content’.  Interestingly enough, my friend Gerald Kukko and I talked many years ago about the need for a ‘self-contained Nugget of Information’.  These self-contained nuggets would contain content and its associated semantically rich metadata, and could be assembled according to certain assembly rules.  Finally, the OpenXML file format gives us such a self-contained nugget, but that alone does not make it Intelligent Content.  We also need to enhance the metadata with semantics, and add the assembly rules.

Another way to approach Intelligent Content is to draw a parallel to Product Lifecycle Management (PLM) in Manufacturing.  PLM is a well-researched and well-documented topic.  It is about how to track and plan all the parts of a complicated assembly, such as a machine.  Interestingly enough, all the PLM experts I have talked to have failed to recognize that there is also a need to track all the documentation associated with such a machine, and its lifecycle, especially when there are many modular and re-usable parts.  And even the people who are familiar with DITA for Technical Publications usually miss the bigger picture of the Lifecycle of Content supporting PLM processes.  I recently had an interesting conversation about this with my friend Jim Averback, and we came to the conclusion that when it comes to managing documents, most of these smart people are spending time and effort fixing existing, fundamentally broken processes – this is especially true in the Life Sciences industry, where there is absolutely no notion of structured content authoring and content re-use, even though this is one of the industries that would benefit most from this approach.  With all the sophistication of PLM systems and concurrent engineering, the documentation processes are still run as if they were medieval guilds.  This is pretty incredible, but we have seen evidence in many places proving that this statement is accurate.

I have blogged about our own Intelligent Content Framework (ICF) initiative before.  I do believe that this represents one of the most advanced states of the Evolution of Content.  It builds on DITA, and takes it further.  Here are some of the key tenets, which I am sure some of the ‘DITA purists’ might not agree with.  Perhaps I have the benefit of being an engineer, and to me this is all pretty much just engineering.

1. The power of DITA, above and beyond using it for technical publications and other complex publishing applications, is that it provides the Information Model that is missing from today’s ECM systems.  The current approach of ‘folders within folders within folders’ and Virtual Documents is not an Information Model.
2. DITA can be applied to individual documents, and not only topics, and can be applied to complex specialized document structures like eCTD.
3. DITA alone is not sufficient, because its metadata model is limited (but extensible), and focused on publishing applications.  So we are enhancing DITA with rich metadata and an enterprise metadata model via MetaPoint.
4. Word and Office OpenXML can be the platform for a powerful native DITA Editor that anyone can use, as long as we hide the DITA and other XML complexity behind the scenes.  All the user has to focus on is the science and the writing, not the formatting and other requirements, and the application has to look exactly like any other Word application.
5. In most cases, you do not really need specialized XML databases to store DITA Topics and to be able to query them.  For ICF, we can do this very well by using a standard ECM system like SharePoint to store the topics and their related metadata, enhanced with faceted search.
6. Collaboration is very important.  Modern ECM systems need integrated tools for collaboration (workflow, tasks, email notification etc.).  Again, all this is built seamlessly into SharePoint, making it an ideal platform for ICF.
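To make the topic-based authoring idea concrete, here is a minimal example of DITA in practice: a single concept topic, and a map that assembles topics into a larger document.  The file names, ids and content are invented for illustration, and the DOCTYPE declarations a full DITA toolchain expects are omitted.

```xml
<!-- stability-summary.dita : a minimal concept topic -->
<concept id="stability-summary">
  <title>Stability Summary</title>
  <conbody>
    <p>Results of the 24-month stability study for Compound X.</p>
  </conbody>
</concept>

<!-- quality-summary.ditamap : a map assembling reusable topics -->
<map>
  <title>Quality Overall Summary</title>
  <topicref href="stability-summary.dita"/>
  <topicref href="impurities.dita"/>
</map>
```

The map, not a folder hierarchy, is what defines the document’s structure, and the same topic can be referenced from any number of maps: that is the Information Model the tenets above refer to.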

As stated in an earlier Blog post, ICF also needs to be complemented by content modeling, content design, content strategy, content reuse strategy, taxonomy, workflow and so forth: we call this Intelligent Content Design (ICD).  This is the focus of the effort Jim is working on now.  He is building a tool called DITA-Talk to support ICD.  What is also very exciting to me is that he is leveraging additional elements of the integrated Microsoft stack.  DITA-Talk is being built on WPF and WCF.  I recently saw a preview of where he is going with it, and it is one of the most exciting applications I have come across in the world of Enterprise Content Management.  Imagine being able to visually design a Document Process, and the end result would be an automatically generated DITA Map, along with some content that it automatically pulled into Topics from back-end systems (including database tables) and existing topics.  We need to move the world of Enterprise Content Management into an evolved state – and we are finally doing that!

Of course, Evolution does not mean that previous species die out right away – for a while the Old and the New will co-exist.  This is why I believe that in the near future, a modern Content Architecture for a Life Sciences company will look something like the diagram below, with Topic-based Authoring and ICF an integral part of the overall architecture:

Compliant ECM Architecture of the Future

Infonomics Magazine – Paperless Clinical Trials

Infonomics magazine just published a very useful article by Ken Lownie of NextDocs, titled ‘In Search of "Paperless" Clinical trials’: http://www.aiim.org/infonomics/in-search-paperless-clinical-trials.aspx  

I am reminded of a related article I read a few years ago called ‘Tortured by Paper’: http://www.bio-itworld.com/archive/081302/tortured.html  It is exciting to see that SharePoint is emerging as a platform of choice for managing Clinical Trials.  See my related Blog Post as well about Clinical Trials and the Microsoft platform.