
How do people manage the development process?

Status
Not open for further replies.

PaulusMagnus

IS-IT--Management
Mar 6, 2008
1
CA
We're currently embarking on our LiveLink installation and a question has come up repeatedly about the development process and how to manage it. So how do other people do it?

We're planning to have DEV, TEST, QA and PROD environments, where new business processes are developed in DEV, promoted to TEST for application-level integration testing, then promoted to QA for pre-production testing, infrastructure integration testing, and stress and performance testing, before the changes are finally pushed through to PROD. As each content management initiative could consist of folders, permissions, workflows, etc., how do people bundle these changes up into a changeset so they can be promoted and rolled back? Do people use third-party tools, or just manual methods of change control?
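For illustration, the DEV-to-PROD promotion path described above can be modelled as a simple changeset object even before choosing a tool. This is a hypothetical sketch (Livelink offers no such API; every name here is invented) showing the idea of promoting and rolling back a bundle as one unit:

```python
from dataclasses import dataclass, field

# Promotion order used in the sketch; mirrors the environments described above.
ENVIRONMENTS = ["DEV", "TEST", "QA", "PROD"]

@dataclass
class Changeset:
    """A named bundle of content-management objects (folders, permissions,
    workflows, ...) that is promoted and rolled back as one unit."""
    name: str
    objects: list            # e.g. ["folder:/Projects/X", "workflow:Approve-X"]
    environment: str = "DEV" # every changeset starts life in DEV
    history: list = field(default_factory=list)

    def promote(self):
        """Move the whole bundle to the next environment in the chain."""
        idx = ENVIRONMENTS.index(self.environment)
        if idx == len(ENVIRONMENTS) - 1:
            raise ValueError(f"{self.name} is already in PROD")
        self.history.append(self.environment)
        self.environment = ENVIRONMENTS[idx + 1]
        return self.environment

    def rollback(self):
        """Return the bundle to the environment it was last promoted from."""
        if not self.history:
            raise ValueError(f"{self.name} has nowhere to roll back to")
        self.environment = self.history.pop()
        return self.environment
```

A real implementation would attach the exported folder, permission, and workflow definitions to `objects` and drive the actual export/import at each promotion step.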

Any tips or advice on governance of the Livelink development process would be greatly appreciated. Thanks.
 
Please go to the communities website and get a copy of "10 healthy habits admins need to know"; it is an invaluable resource.

Also, take a look at understanding the architecture. I can't draw here, but I will try to explain.

A production Livelink install:
- One or more Windows (or Unix) servers and a web server
- Livelink code files under the install root
- A connection to a database as a schema user
- Probably an EFS (external file store)

A DEV/QA/TEST install can be made by:
- Replicating the production modules
- Copying the production database and bringing it up as DEV
- Copying the external file store, to maintain the EFS; if you want to test search, re-index

Most of the time, this is also how you get ready to upgrade a Livelink install.
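As a rough sketch of the clone-and-re-point step above: after copying the production database and module files to the new host, the clone has to be pointed at the DEV database so it cannot touch production. The config format, the key names, and the idea that this is a plain-text edit are all assumptions for illustration; the real file and keys vary by platform and Livelink version.

```python
# Sketch: rewrite database connection lines in a cloned config file so the
# DEV copy connects to the DEV schema, not production. The "database" and
# "username" keys are invented for this example.

def repoint_config(config_text, new_db, new_user):
    """Return config_text with the database and username lines rewritten."""
    out = []
    for line in config_text.splitlines():
        key, _, _ = line.partition("=")
        if key.strip().lower() == "database":
            out.append(f"database={new_db}")
        elif key.strip().lower() == "username":
            out.append(f"username={new_user}")
        else:
            out.append(line)  # leave unrelated settings untouched
    return "\n".join(out)
```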

Most Livelink objects are XML-transportable between systems. Livelink maps are freely exportable. Executing maps are table data, so if you replicate those tables by cloning them, you will have the same information.

Permissions are also preserved if you clone the database.
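One practical consequence of the XML transportability mentioned above is that an export can be inventoried and diffed with ordinary tooling before promotion. The element and attribute names in this sketch are invented; a real Livelink export follows its own schema:

```python
import xml.etree.ElementTree as ET

# Invented stand-in for a Livelink XML export, for illustration only.
SAMPLE_EXPORT = """\
<export>
  <node id="2001" type="folder" name="Projects"/>
  <node id="2002" type="workflow" name="Invoice Approval"/>
</export>
"""

def inventory(xml_text):
    """Return (type, name) pairs for every exported node -- enough to
    review what a changeset contains before promoting it."""
    root = ET.fromstring(xml_text)
    return [(n.get("type"), n.get("name")) for n in root.findall("node")]
```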

has excellent tools for making this all work under the covers.

Well, if I called the wrong number, why did you answer the phone?
James Thurber, New Yorker cartoon caption, June 5, 1937
 
PaulusMagnus - I am the IT architect for ECM at a large insurance company. We are currently setting up a Livelink environment for enterprise use and have asked exactly the same question about controlling the development process. In all other environments (.NET, Java, SAP, etc.), the process of moving solutions from one environment to another is very controlled, to ensure that the correct changes are applied, that they don't negatively impact other projects, and to support potential rollback.

However, LiveLink appears to be lacking in this ability. OpenText does not really appear to understand the potential problem and has indicated that moving objects via XML files is what others do.

I'm not convinced. If there is nothing better, then we may need to copy the XML object files into our existing code management system to ensure that the changes are correctly tracked and approved before migration. In addition, I guess we would have to take a full image of the database etc. to support rollback.
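Checking the exported XML files into a code management system pairs naturally with content fingerprinting, so a migration can be gated on whether an object actually changed since the last approved version. A minimal sketch (the function names are my own):

```python
import hashlib

def fingerprint(xml_text):
    """Stable hash of an exported object definition, suitable for storing
    alongside the file in a code management system."""
    return hashlib.sha256(xml_text.encode("utf-8")).hexdigest()[:12]

def needs_review(approved_fp, xml_text):
    """True when the export differs from the last approved version, i.e.
    the change must go back through tracking and approval before migration."""
    return fingerprint(xml_text) != approved_fp
```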

Please update the post if you find better information. I think this is a serious gap that OpenText needs to address if LiveLink is to become a serious enterprise application platform.
 
Some companies have developed slightly more complex methods of moving data between systems than a simple XML or Project Import/Export. One of them is Causeway, who have an Import/Export Tool which is very good for this kind of thing.

Greg Griffiths
Livelink Certified Developer & ECM Global Star Champion 2005 & 2006
 
Migrating data across Livelink instances has always been a big problem. I was one of the original Livelink core developers, and from our perspective we never had this problem because we never faced it in our day-to-day development operations. It was only when I left Open Text and decided to do a little consulting work that these issues became apparent. My first outside consulting job resulted in an application taxonomy of over 15,000 objects. Trying to recreate that taxonomy on different instances was a nightmare, so I developed a new module that did it automatically.

As appnair mentioned in a previous post (thanks for the plug), our company, Global Cents, has a product that helps to alleviate the pain of this data migration. It is based upon the XML Export/Import system built into Livelink. It adds a GUI layer to allow an admin to select which items should be exported. It also allows for exporting/importing of Users and Groups. On import, all the objects with their permissions and category data are imported. It also maintains all the links to other objects that exist inside workflow map definitions. When you import your workflow maps they still have their attachments folders, their links to the form templates, and links to any categories or Livelink objects maintained. They also retain their step assignments.

The next version of this module, being released in May, also allows the import to update existing objects in Livelink. This is needed for making changes to existing applications within Livelink. These updates include optionally modifying the metadata, permissions, and category data, and adding versions.

To see a small recorded demo of the module you can visit the Global Cents web site at:
The narrator of the demo is Tammy Jakubowski (McMahon), formerly of Open Text.
 
Also, for Jeff: if the module does not already do this, a useful addition would be a mechanism for merging two Livelink instances so that the DataIDs of the first system and the DataIDs of the second system co-exist, i.e. a graceful merge. I don't know how you would do that; maybe it is a fundamental flaw in OT's DataID allocation. I was recently working in DCTM (Documentum), which uses a better method of object-ID allocation, so that no two DCTM instances will generate identical node IDs.
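The Documentum-style allocation described above, where each repository stamps its IDs with an instance tag so two systems can never collide, can be sketched as follows; the 48-bit split is an invented example, not the actual DCTM layout:

```python
def make_id(instance_tag, sequence):
    """Compose a globally unique node ID from an instance tag and a local
    sequence number, so two instances can never mint the same ID."""
    return (instance_tag << 48) | sequence

def split_id(node_id):
    """Recover (instance_tag, sequence) from a composed ID."""
    return node_id >> 48, node_id & ((1 << 48) - 1)
```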


 
Appnair,

The system does not try to merge the DataIDs, but creates a new ID for each imported object. The module builds a map from each ID on the source system to the new ID on the target system. The rest of the module knows to look for the mapped ID instead of the original ID.
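The two-pass approach just described (allocate fresh target IDs, then rewrite references through the map) can be sketched in a few lines. This is an illustration of the idea only, not the actual module's implementation:

```python
def import_objects(source_objects, allocate_id):
    """Import objects, assigning fresh target IDs and rewriting any
    cross-references so they point at mapped IDs instead of source IDs.

    source_objects: list of {"id": int, "refs": [int, ...]} dicts
    allocate_id:    callable returning the next free ID on the target
    """
    # First pass: give every incoming object a new target ID.
    id_map = {obj["id"]: allocate_id() for obj in source_objects}
    # Second pass: rewrite cross-references through the map.
    imported = [
        {"id": id_map[obj["id"]],
         "refs": [id_map[r] for r in obj["refs"]]}
        for obj in source_objects
    ]
    return imported, id_map
```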

Using the XML base code in Livelink allows the module to add and update objects in Livelink using the exact same code that would run if a user were to do it by hand. That fact allows for transfer of data between Livelink versions. I've often exported data from a 9.1 instance and imported it into a 9.7 instance.

Trying to have each instance generate IDs that are unique across all other instances would be a nightmare. We use the tool to deliver our custom services work. We also use it to deliver our power applications products, which are complete Livelink applications. If each of our customers had to have their instances generate IDs unique across each of their own internal systems, plus unique from any other customer's IDs, it would cause a shortage of IDs in Livelink.
 
