Eric J Ostrander's

ClearCase / ClearQuest / Git/Stash "how to" pages.

ClearQuest, how do I ...


BIRT   |   CC non-UCM integration   |   CC UCM integration   |   Databases   |   Email   |   Forms & Fields   |   Hooks / API   |   Misc.   |   Packages   |   Project Tracker   |   Queries, Charts & Reports   |   Records & Record Types   |   RPE   |   Schemas   |   Security   |   States & Actions   |   Users & Groups   |   UCM   |   Web

This page is my own personal work. Anyone can use it for their own edification, but must realize that this material is not supported by me and the site is not affiliated with IBM Rational Software or Atlassian.



BIRT
Back to the TOP
General.
Dynamic filters.
Case-insensitive sorting.
Create a custom style.
Place multiple elements in a single cell.
Create a formatted report title.
Insert dynamic text.
Change the query associated with a report.
Create a report template.

Databases
Back to the TOP
Create a new database.
Set a test database.
Toggle the "Visible to the designer only" switch.
Remove a database.
Determine the schema repository database.
Set up SQL Anywhere.
Determine the schema used by a user database.
Upgrade a user database.
Logically delete a database if the physical database is gone. (pdsql)
Rename a database set (dbset) connection name.
Index history records.
Toggle a database between Test and Production modes.
Move a schema repository's physical database.
Name a dbset (master database).
Set the schema repository data code page value.
Interpret the values of ratl_priv_mask in a backend db table.
Rename a database.
Enable multiline text searches in Oracle: CLOB.
Unlock a user database or schema repository.
Log into a test database via the web.
Upgrade a user database programmatically.
Configure (or determine) a schema repository LDAP authentication.
Add/remove users to/from LDAP authentication.
Basic SQL. (SQL Editor)
Determine the next dbid that will get used. (getrecordlimits)
Obtain the database record ID allocator (dbid) limit. (getrecordlimits)
Increase the record ID (dbid) limit (sethighrecordlimit)
Rational ClearQuest Diagnostics utility
CQ feature levels
Use pdsql.
Manually set the db column name.
Update schema repository after copying data from another database.
Register a schema repository.
Make a copy of a database.
Generate a summary of database details.
Get a programmatic list of user databases.
Determine a database set if you know the user database.
List the database sets (dbset, schema repo) known to a Unix server.
Add a schema repository (dbset) connection to a Unix server.
Remove a schema repository (dbset) connection from a Unix server.
Programmatically determine a local dbset name if you know the underlying SID.
View which server a schema repository is associated with.

Email
Back to the TOP
Add free-form text to an Email Rule.
Set up email notifications.
Lock down certain Email Rules.
Create an Email_Rule record.
Modify an Email_Rule record.
Set up the Rational Email Reader.
Embed URL's in description fields. (mailto)
Include attachments with an email.
Force a user to enable email notification.
Programmatically send emails.
Install the EmailPlus package.
Basic EmailPlus rule.
EmailPlus tags.
EmailPlus Advanced Rule.
EmailPlus Subscribers.

Forms & Fields
Back to the TOP
Allow a user to set default values.
Create a new field.
Set up a parent/child control field.
Remove a field.
Toggle whether a field is mandatory.
Create a pull-down menu field.
Add help text to a field.
Create a dynamic/static/dependent choice list field.
Make a field read-only.
Add a push button form control.
Add a combo box form control.
Add a new tab.
Restrict read access.
Move a field between tabs.
Remove a tab.
Clone a form.
Export/import a form.
Create a "Keywords" type control.
Export/import dynamic lists.
Create a variable/dynamic Keyword list.
Add an image to a form.
Add a Duplicate form control.
Add an Option (radio button) control.
Set the field tab order.
Ensure consecutive record IDs.
Enable multiline text searches in Oracle: CLOB.
Embed URL's in description fields.
Date/time form control WARNING.
Dynamically change a field's label on a form.
Manually set the db column name.
Scroll bars.
Include an ampersand (&) in a field label.
Backfill data from one field to another en masse.
Difference between Drop-down Combo and Drop-down List boxes.
Determine if a dynamic list is being used by a schema.
Resize a form in the Eclipse Designer.
Create a list of fields changed during an action.
Programmatically retrieve the contents of a dynamic/named list.
Set up a web dependent field.
List web dependent fields.
Get a field's database column name.
Determine if a field is being displayed on a form.
Determine the max length of a field.
Get a field's title (as opposed to its name).

Hooks / API
Back to the TOP
General hooks.   (general)
Remove a field/action hook.
Create a permission field hook.
Create a validation field hook.
Create a global script.
Create a record script.
Get an array of values for a specific user group's field.
Get a user value based on another user value.
Automatically transfer record mastership.
Web hooks.
Set permission by user.
Set permission by group.
Get active records (users,projects,etc).
Ensure consecutive record IDs.
Create an Action Notification hook.
Ensure the specified date is not in the past.
Use regular expressions in VBScript.
Determine the unique key field(s) of a record type.
GetEntity if the record type has more than one unique key.
Build, utilize, and refresh session variables.
Determine the most recent checkedin version of a schema.
Debug a hook.
Save a detailed history (AuditTrail) of changes.
Copy hook code from one location to another.
Date/time Value Changed hook WARNING.
Add custom PERL modules to cqperl.
Check a Windows registry setting.
Ensure all hooks run in nested actions; CQHookExecute.
Get a list of entity defs.
Access a website from within CQ.
Determine the name of the current user db.
Determine the current record type/name.
Get a list of attachments.
Determine the name of the current dbset.
Programmatically determine which entry is selected in a list box.
In Perl, ensure all values in an array are unique.
Find all back reference fields in a schema.
Get a list of dynamic lists.
Set another field's choice list.
Only run the hook in the client interface.
Do a case-insensitive sort for a choice list.
Compare dates subroutine.
Get a listing of all fields in a record type.
Determine if a record is being modified inside batch update.
Programmatically add/remove entries to/from a reference list field.
Programmatically get the members of a dynamic list (named list).

Misc.
Back to the TOP
Licenses.   (general)
Determine the version of CQ.
Determine installation media type.
Batch update of records.
Import data into CQ.
Patch CQ.
Print from CQ.
Index database tables.
Performance.
Turn on/off tracing for native Window clients.
Send messages to the user.
Uninstall CQ on Windows.
CQ Release area siteprep.
Customize the AuditTrail package.
Set up pessimistic record locking.
UNC paths.
Dynamically create an HTML document.
Backfill data from one field to another en masse.
Open Services for Lifecycle Collaboration (OSLC).
Programmatically create an Excel spreadsheet using Perl.
Programmatically work with Word docs using Perl.
Pass a $session variable to a different Perl script.
Schema design DOs and DON'Ts.
Remove/scrub history records.
Tell CQ to use a different database driver.
Database views.
Delete a dynamic/named list.

Packages
Back to the TOP
Install a package into a record type.
Install a package into a schema.
Remove a package.
Determine schema packages and upgrade level.
Edit schema packages. (packageutil)
Register a custom package. (packageutil)
Remove a package from a record type.

Project Tracker
Back to the TOP
Project Tracker overview.

Queries, Charts & Reports
Back to the TOP
Run a query.
Create a new query.
Run a chart.
Create a new chart.
Run a report.
Create a new report using Seagate Crystal Reports.
Edit an existing chart's parameters.
Create a report using Rational SoDA for Word.
Export/import/copy/share queries, charts, and reports; public or personal. (bkt_tool)
Rename a query.
Delete a query.
Extend SoDA source domains.
Enable multiline text searches in Oracle: CLOB.
Enable a report to display local time zone date/times.
Have a query run at startup.
View/edit the SQL equivalent of a query.
Add record counts to a report.
Run a query on a date field to find those dated "yesterday".
Sort an API query.
Determine a record's dbid.
Edit SQL query language/code.
Restrict queries, charts, and reports to groups.
Rename a field (replace a string) across all queries.
Programmatically update and save an existing query.
Programmatically run an existing query.
Have a query only return the latest history timestamp for each record.
Customize the column header name in a query's result set.
Group filters together in the eclipse client.

Records & Record Types
Back to the TOP
Set the default record type.
Create a new stateless record.
Create a stateless record type.
Set the stateless record type unique key.
Create an Email_Rule record.
Modify an Email_Rule record.
Create a record type.
Create a record type family.
Remove a record type or family.
Rename a record type or family.
Duplicate an entire record type.
Hide records/types from users/groups. (security context)
View a record's history.
Remove a record's history.
Clone a record type.
Delete a record.
Count history records.
Export/import dynamic lists. (importutil)
Export records from a CQ database.
Retire a record type or family.
Print a record.
Submit a record via email.
Display a record from an external script.
Install a package into a record type.
Ensure consecutive record IDs.
Remove a package.
Import data into CQ.
Open a record for edit from a URL.
Programmatically add an attachment.
Update multiple records at once.
Delete history records.
Get a record type's database table name.
Record templates.
Change the State of a record using SQL.

RPE
Back to the TOP
General.

Schemas
Back to the TOP
Create a new schema.
Install a package into a schema.
Remove a schema.
Rename a schema.
Import a schema.   (cqload)
Export a schema, version of a schema, or record type.   (cqload)
Determine databases associated with a schema.
Determine the schema repository database.
Determine schema packages and upgrade level.
Export/import dynamic lists. (importutil)
Edit schema packages. (packageutil)
Edit the same schema in two different schema repositories.
Debug a schema. ($session->OutputDebugStatement)
Delete a schema version.
Embed an instruction manual.
Build, utilize, and refresh session variables.
Update a schema version in a different dbset without CQ MultiSite.
Determine the most recent checkedin version of a schema.
Set the schema repository data code page value.
Save a detailed history (AuditTrail) of changes.
Determine who has a schema checked out.
Check in a schema checked out by a different user.
Delete a record type.
Delete a UCM-enabled record type.
Start an adminsession.
Rename a record script.
Ensure traceability of schema versions across schema repositories.
Programmatically get a list of schema repositories.
Restart a schema at version 1.

Security
Back to the TOP
Hide records/types from users/groups.
Make a field read-only.
Restrict tab read access.
Restrict write access to CQ objects.
Create a permission field hook.
Set up user password authentication.
Restrict web access.
Administer dynamic choice lists.
Administer public queries folder.
Administer security at a site.
Restrict the list of users/groups seen by users.
Reset CC/CQ integration password and database.
Update user/group information in a user db from an external script.
Restrict access to specific records.
Change user information from the Client.
Create new users and groups.
Log into the web without manually typing in a username and password.
Subscribe all users to new db.
Configure a schema repository for LDAP authentication.
Set users up for LDAP authentication.
Set up electronic signatures (eSignature).
Set up a user as strictly readonly.
Add/remove users to/from groups.
Restrict queries, charts, and reports to groups.
Determine a user's authentication mode.


States & Actions
Back to the TOP
Create a new state.
Create a new state action.
Create a stateless record type.
Set a default action for a state.
Set the order in which the actions are listed.
Disallow an action in certain states.
Remove a state.
AMStateTypes
Nested actions.
BASE actions.
View the State Transition Matrix (State Section).

Users & Groups
Back to the TOP
Users & groups.   (general)
Restrict read access.
Create a CQ group.
Create a CQ user.
Restrict write access.
Import user information.
Set up groups within groups.
Reset the CC/CQ integration CQ login.
Add a user not mastered locally to a group.
Delete a user.
Unsubscribe a user.
Determine a user's groups.
Determine a group's users.
Create a new user with an admin session.
Copy missing users from db to another.
Change a user's login_name.
Determine user record mastership with the API.
Determine a user's authentication mode.
Push user information into a user database.

Web
Back to the TOP
Restrict web access.
Use SQL Anywhere with CQ web.
Web hooks.
Log into the web without manually typing in a username and password.
Restart web services.
Turn on Java web tracing.
Load balance request managers on multiple web servers.
Get web server status.
Rename java.exe web services.
Performance tune new CQ web.
Log into a test database via the web.
Create a URL for a record.




General.
Version: 7.1.2
Updated: 12/18/12
BIRT: Business Intelligence and Reporting Tool. BIRT comes bundled with CQ 7.1+ Eclipse client installs. To use the report designer, see: https://publib.boulder.ibm.com/infocenter/cqhelp/v7r1m0/index.jsp?topic=/com.ibm.rational.clearquest.user_ec.doc/topics/t_birt_reports.htm
Note that it appears the above link has an error in it. The top of the page indicates that the instructions are for the VB client and you need to see BIRT online help to work with the Eclipse client. However, the above link seems to be for the Eclipse client.
That said, the above link is just a basic getting-started page. For a detailed tutorial that works with report formats, see http://publib.boulder.ibm.com/infocenter/rmc/v7r5m0/index.jsp?topic=%2Forg.eclipse.birt.doc%2Fbirt%2Fbirt-03-10.html
When looking for information/documentation, look for BIRT RCP (Rich Client Platform); in ClearQuest's case, the RCP is Eclipse.

One downside of BIRT reports is that BIRT is external to and separate from CQ. You can't run BIRT reports from within CQ. You can run them from within an Eclipse client, which can also run CQ, but not at the same time. BIRT can be set up on a web server such that the CQ Report Server runs it, which can be useful for centralizing the reports, but they still can't be run from the fat client, say, to give users the ability to print a single, formatted record.

Table of Contents





Dynamic filters.
Version: 7.1.2
Updated: 12/21/12
BIRT reports can prompt the user for information at run time. This makes it possible to define a single report format with configurable parameters, such as Project, or to extract a single record in a formatted way.
1) In the Eclipse client, File -> Open File. Locate the .rptdesign file.
2) Window -> Show View -> Data Explorer.
3) Right-click on Report Parameters and select New Parameter. Give it a Name and Help text, plus other parameters, such as "Do not echo input" if prompting for a password. Click OK.
4) Right-click on the data set and select Edit. Click on Filters. Click New. In the Expression field, use the pulldown menu to select the field whose value is to be prompted. Select the Operator. In the Value 1 field, in the pull down menu select "". Select Report Parameters, --All--, and then double-click on your-parameter. Click OK in the Expression Builder. Click OK in the New pop up. Click OK in Edit Data Set.
5) File -> Save.
WARNING: If you Preview the report in the report designer, the value you enter when prompted is saved as part of the general "save" the tool automatically does when you Preview a report. That is, the entered value becomes hard-coded. So, while the system will still prompt you to enter a value when the report is run, it will now have a misleading default value, which probably isn't desirable in most cases. If you want to "preview" it, select File -> View Report -> format.

Table of Contents





Case-insensitive sorting.
Version: 7.1.2
Updated: 12/20/12
In the layout editor, hover over the table to be sorted. Click on the Table tab that dynamically appears in the lower-left of the table. In the property editor, select the Sorting tab. On the right, select Add and choose the column(s) to be sorted. It's unknown right now how one sets the sort order.
To make the sort case insensitive, select the column (sort key) and click Edit. In the Key field, add exactly ".toUpperCase()" to the end of the entry.

Table of Contents





Create a custom style.
Version: 7.1.2
Updated: 12/20/12
Custom styles allow you to apply the same properties to many report elements. If you change the style definition, all elements using that style will automatically pick up the change.
1) In the main menu bar, select Element -> New Style...
2) In the Custom Style field, give the style a unique name.
3) Select the properties on the left-hand side and adjust the parameters on the right-hand side.
4) Click OK.
5) To apply a custom style to an element, select the element in the layout editor. In the properties sheet, select the Style from its pull down menu on the General tab.

Table of Contents





Place multiple elements in a single cell.
Version: 7.1.2
Updated: 12/20/12
In the layout editor, if you drag multiple elements to a single cell, they are simply stacked one on top of the other. To customize the relationship (format) among the elements, double-click on the cell and then click the fx to the right of the Expression field. In the Expression Builder you can add several data columns by selecting a Category "Available Data Sets", a Sub-category (your specific data set), and then double-clicking on the data field names to add them.
To concatenate data, using "Lastname, Firstname" as an example:
	dataSetRow["CONTACTLASTNAME"]+", "+dataSetRow["CONTACTFIRSTNAME"]
Table of Contents



Create a formatted report title.
Version: 7.1.2
Updated: 12/20/12
Report titles can be created using either a label element, a text element, or a data element:
- The label element is suitable for short, static text, such as column headings.
- The data element is suitable for displaying dynamic values from a data set field or a computed field.
- The text element is suitable for multi-line text that contains different formatting or dynamic values.
1) In the layout editor, select Window -> Show View -> Palette.
2) Drag the text element from the palette to the layout and drop it above all the other tables and text. If the upper-most element is already at the top of the layout, dropping the text element to the left will automatically put it on top.
3) In the Edit Text Item pop-up, select HTML in the pull down menu at the top.
4) Edit the title using HTML tags. For example:
	<CENTER><B> 
	Customer List 
	</B><BR> 
	<FONT size="small">For internal use only</FONT><BR><BR> 
	Report generated on <VALUE-OF>new Date()</VALUE-OF> 
	</CENTER><BR><BR>
For standard HTML tags, see http://www.w3schools.com/tags/default.asp.
For BIRT dynamic text, such as the <VALUE-OF> tag used above, see Insert dynamic text on this page.

Table of Contents





Insert dynamic text.
Version: 7.1.2
Updated: 12/20/12
Dynamic text is textual information that is determined when the report is run, such as the current date/time or the person running the report.
The instructions for adding dynamic text are fairly clear, but information on what each dynamic text entry does and how it is configured is hard to come by.
1) Double-click on the text element.
2) In Edit Text Item select HTML in the menu at the top.
3) Select Dynamic Text in the menu on the left.
4) Single-click just to the right of that menu.
5) In Expression Builder, select a Category, Sub-Category, and Double Click to insert.
6) Click OK to close the Expression Builder.
7) Click OK to close the Edit Text Item.
8) Preview the change.

Note that most of the things (functions) that can be placed in the tag are Java, and getting them interpreted correctly requires additional Java scripting.

Table of Contents





Change a report's query.
Version: 7.1.2
Updated: 12/20/12
When a report is created, it's bound to a CQ query. You can change the query by right-clicking on the data set, selecting Properties, and changing the query it's pointing to. After making that change, right-click on the data set and select Refresh.
Note, however, that if the new query doesn't return at least the same fields as the previous query, the old field list seems to be cached in the XML even when the missing field isn't used in the report format. That is, even though you aren't referring to the unused field in the layout, when you go to Preview using the new query, it will produce an error that it cannot locate the column data. The new query is not returning that column, and even though the column isn't used in the layout, the system still remembers it from before. It's unknown how to get it to forget about the old, unused column. The workaround is to have the new query return all the fields that the original query did, even if they aren't used. The new query can return additional fields, though.

Table of Contents





Create a report template.
Version: 7.1.2
Updated: 12/21/12
See http://www-01.ibm.com/support/docview.wss?uid=swg21584423
1) In the CQ Eclipse client 7.1+, File -> New -> New Template.
2) Give the template file a name. The extension must be "rpttemplate".
3) Give the template a display name. This is the name users will see when selecting a template.
4) In the Data Explorer tab, right-click on Data Sources and select New -> New Data Source. This is the CQ database in which the query is located. Select ClearQuest Query Data Source and click Next. Give the data source a meaningful name, such as the db set name and db name, like: PRODCQ_C9HLT. Click Next.
5) Specify the CQ dbset and db. Provide a login and password. Optionally click Test Connection. Click Finish.
6) Back in the Data Explorer tab, right-click on Report Parameters and select New Parameter. Enter "UserId" in the Name field. Add a short string of Help Text, such as "Specify your ClearQuest login". Click OK.
7) Back in the Data Explorer tab, right-click on Report Parameters and select New Parameter. Enter "Password" in the Name field. Add a short string of Help Text, such as "Specify your ClearQuest password". Select the "Do not echo input" checkbox. Optionally, if null passwords are allowed, de-select "Is Required". Click OK.
8) Bind the new parameters to the data source. Right-click on the data source created above and select Edit. Click on Property Binding. Click the fx next to User Name. Choose Report Parameters -> All and double-click on {}UserId. Click OK. Back in Edit Data Source, click the fx to the right of the Password field. Select Report Parameters -> All and double-click on {}Password. Click OK. Back in Edit Data Source, click OK.
9) Bind to a data set. Select Data Sets -> New Data Set. Click Next. Select a public query. Unless this template is strictly for personal use, the bound query should be in the Public Queries folder.
10) Design the layout. Instructions for doing this are elsewhere.
11) Save the changes. Select File -> Save. Note that selecting Preview does a save of the report.
12) Register the template for use in the CQ query wizard. Select File -> Register Template with New Report Wizard. Make changes to the template name and description as desired. Click Finish. Note that the template is only registered on the current workstation.
13) Generate a report that uses the template. In CQ Eclipse client, File -> New Report. Give the report a name and storage location. Click Next. Select the template created above. Click Finish.

Table of Contents





Create a new database.
A database is the repository for records that share a common schema. If customizing CQ, it is highly recommended that you create a "test" database in which to test your schema modifications before committing them to an actual database. There cannot be an open schema during the database creation. In the CQ designer, Database -> New Database... and follow the prompts. The "Logical Database Name" must be between 1 and 5 characters and can only contain letters (upper or lower case), numbers, and underscores. If creating an Oracle or SQL Server database, the physical database must have been set up prior to and independent of the database declaration in CQ. If using MS Access, the physical database will be created for you during the process. You must have already created a schema to associate with this database.
NOTE: In CQ 2001-, if the database is to be used for testing, be sure to select "Visible to the designer only". In CQ 2001A+, click the box that states "Test Database".
NOTE: New databases cannot be created from a client install.
NOTE: Microsoft Access databases should only be used for test databases, as they have limitations: a maximum of 10 users can access the database at a time, Rational doesn't support it in the web interface, and there are hard limits on the number of fields that can be in the schema.

Table of Contents





Set a test database.
If an appropriate test database does not already exist, without any schemas open for edit in the CQ Designer, go to Database -> New Database and create one associated with the schema to be tested.
Each schema that is edited should be tested in a database not being used for real data. Once the schema is opened, Database -> Set Test Database... The test database needs to already be associated with the current schema to show up in the selection list. Unfortunately, I do not know at this time if it's possible to save the Test Database definition across sessions; i.e., the Set Test Database setting goes away when you exit the Designer.

WARNING: There is a bug at this point. Initially when the Test Database window comes up, the Properties button is ghosted out. If one pulls down the selection list and selects a database, the Properties button becomes active. If one selects the "null" position in the list, the Properties button becomes active anyway. If you click on that button at this point, it will crash the Designer (see RAMBU00009144). In summary, don't try to get the database Properties of nothing.

Table of Contents





Toggle the "Visible to designer only" switch.
A database can be visible to the CQ Designer only, for testing purposes or during initial development. However, if one then wants to make the database visible to CQ clients, in the Designer go to Database -> Database Properties..., select the database, click Properties, and toggle the setting.

Table of Contents





Remove a database.
In the CQ Designer, Database -> Delete Databases... -> select the database -> Delete. However, this will only sever the link between the physical user database itself and the schema repository (master database). The link can be re-established at any time using the Undelete option. However, the two cannot be relinked if the user database is not in its original location, CQ has been upgraded, or the schema version to which it was originally attached has been deleted. To physically delete SQL Anywhere or Access databases, simply remove the files with standard Windows commands. See the Oracle documentation to physically remove one of those.

Table of Contents





Set up SQL Anywhere.
1) Install Sybase SQL Anywhere Database Server. It comes bundled with CQ and Rational Solutions for Windows. Choose the Administrator option.
2) Open the Windows User Manager. If not already done, create an account under which SQL Anywhere will run. Most likely it will have a name such as sql_admin. Be sure to deselect "User Must Change Password at Next Logon" and ensure "Password Never Expires" is selected.
3) Open Policies -> User Rights... and select the "Show Advanced User Rights" checkbox. In the Right box, select "Access this computer from the network". Click Add and, for List Names From, select the domain to which the SQL Anywhere server belongs. Click Show Users and select the SQL admin name. Click Add and OK. In the Right box, select "Log on as a service" and repeat the above steps for adding the SQL Anywhere admin. Repeat the steps again for "Log on locally". Click User -> Exit.
4) Create an SQL Anywhere server. From the Start menu, open Sybase SQL Anywhere Database Server -> Rational Administrator. Select File -> New SQL Anywhere Database Server. Give the server a name unique to the network consisting only of alpha-numeric and underscore characters. Type in the amount of space for caching SQL data. Allow 2048KB for each Rational repository on the server. Next, select one or more protocols used to communicate with the SQL server. To reduce connection time, use only those protocols that are actually used on the network; most likely TCP/IP. Next, select the Startup option; most likely Automatic. Next, at the point where you type in the Account, make sure you have selected Other Account and that the login name is in the syntax of domain\sql-login. If this SQL server is not on a domain and the sql-login is a local account created in step (2), simply replace the domain name with the local server name. Next, select Finish.
5) Create a share to hold the SQL databases. Place it on the server running the service created in step (4).
You are now ready to create SQL Anywhere databases via CQ. When creating a new database, at the point where it asks for "Database Server Name", it wants the service account name created in step (2), not the server name created in step (4).

Table of Contents





Determine the schema used by a user database.
Updated: 02/24/09
In the Designer, select View->Database Summary.
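If you need the same information from a script, the admin API can report it as well. A minimal cqperl sketch (the login, password, dbset, and logical database name below are placeholders); to map the revision back to a schema name, loop over GetSchemas as shown in the programmatic upgrade example further down this page.

use CQPerlExt;

# Minimal sketch: print the schema revision a user database is currently at.
my $adminSession = CQAdminSession::Build();
$adminSession->Logon("admin", "password", "MYDBSET");     # schema repo admin login, password, dbset
my $databaseObj  = $adminSession->GetDatabase("SAMPL");   # logical user database name
my $schemaRevObj = $databaseObj->GetSchemaRev();
print "SAMPL is at schema revision ", $schemaRevObj->GetRevID(), "\n";
CQAdminSession::Unbuild($adminSession);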

Table of Contents





Upgrade a user database.
"Test" user databases are updated when a schema is modified and the work is tested using that database. However, the schema can go through many revisions before updating the production user database.
Once the schema is checked back in for the final time, update each user database that uses it. In the Designer, start Database -> Upgrade Databases. Choose the database to be upgraded. It will show the current schema version associated with that database. Next, choose the schema revision with which to update the user database. If there is more than one new schema version, as of CQ 2001 you can simply choose the latest schema rev and all changes will be applied; otherwise you must upgrade the user database with each revision separately, in order.

Table of Contents





Logically delete a db if the physical db is gone.
Updated: 01/18/13
Version: 8.0.1.09
Normally, the steps to delete a database from CQ are to first logically delete the database name from the schema repository via the Designer and then remove the physical database from the system.
If the physical database was removed first, restore the deleted physical database from backup and then use the Designer to delete it logically. However, if that isn't feasible, the following steps can be used.

WARNING! Misuse of the following commands can corrupt your schema repository. Use them only as a last resort. Ensure you have a proper backup before using these commands. Ensure nobody is logged into the schema repository.

Using your DBMS vendor's SQL tool or the CQ provided "pdsql", open an SQL session to the schema repository and do the following. These command examples were written for an MS_ACCESS schema repository. For other vendors and related options, type "pdsql -help". Don't forget the semicolon at the end of each SQL command. Note that the following command is run logged into the MASTR database.
  # pdsql -v access -db \\machine\share\schema-repository-name -u admin-login -p admin-password


These steps will actually remove the user database name from the MASTR.

> select master_dbid from master_dbs where name='user-database-name';

master_dbid
   dbid

WRITE DOWN THE DBID!

> delete from master_dbs where name='user-database-name';
1 rows affected.

> delete from master_links where to_master_dbid = dbid and link_type in (3,4);
n rows affected.

> quit;


Alternatively, this step will only logically remove the user database name from the MASTR.

>update master_dbs set is_deleted = 1 where name = 'user-database-name';
1 rows affected.

>quit;

Log into the Designer and verify that the database has been deleted.

Table of Contents





Rename a database set connection name.
Updated: 08/14/06
In CQ 2002 or later, open the Rational ClearQuest Maintenance Tool. In the left pane right-click on the database set to be renamed and select Rename.
From the CLI:
  # installutil renamedbset old-dbset-name new-dbset-name
Note that if the web interface is used, the web services may have to be restarted to pick up on the new name.
Prior to CQ 2002, the database set name must conform to a naming convention such as "2002.05.00" if CQ is to be integrated with other Rational Suite products.

Table of Contents





Toggle a database between Test and Production modes.
If a db is designated as a Test database, when users log into CQ they won't see that db listed as a choice. However, you can type the name in by hand to access it. A db needs to be designated as Test to be used as a test database in the Designer.
You can change the mode, Test or Production, for a database at any time without issue. In the Designer, go to Database -> Update User Database Properties. Select the db to be redesignated and click the Properties button. You can toggle the mode at the bottom of the resulting page.
NOTE: To connect a CC UCM project to a CQ database, the database must be in Production mode. However, if you want to run tests on a UCM-enabled schema while still logged into the Designer, the user database must be in Test mode. The only way to test a UCM-enabled schema that is connected to a CC UCM project is to have the "test" user database in Production mode. This means that you can't simply hit the Test Work button. You'll need to check the schema in, upgrade the appropriate user database, and log into CQ the normal way to test the changes.

Table of Contents





Move a schema repository's physical database.
Updated: 08/23/17
Version: 8.0.1.14
The physical location of a schema repository can be moved to any network accessible location. However, it doesn't absolutely have to be a UNC path if working on a stand-alone machine. Copy the schema repository database to its new home.
If moved to a new server, the old server name is embedded in the database and needs to be updated. In the ClearQuest Maintenance Tool, update the schema repository connection properties so that they point to the new location.

As an alternative in MS Access, simply copy the schema repo and all the databases (perhaps zip them up) and move them to the new location. You'll need to change the path, which is written into the .mdb file.
1) Double-click on the schema repo .mdb file to open it in MS Access. Don't upgrade it and ignore the message about things being "unsafe".
2) In the database tables list, double-click on "master_dbs".
3) Under the column entitled Database Name, change the path to the schema repo and all its user databases to the global share path at the new location, then simply close Access. Even though it doesn't prompt you to save the changes, they get saved.

Table of Contents





Name a dbset (master database).
When assigning names to dbsets, follow these rules:
If you have only one dbset, you can name it as you want.
If you have multiple dbsets, the dbset that is to be associated with the integration should, if possible, be named the CQ version that you are using; for example, 2003.06.00.
If you have multiple dbsets and none of them can be assigned the name of the CQ version string, use the procedure that follows for naming the dbset that is to be associated with the integration.
1) Stop the clients.
2) Start the Windows registry editor.
3) Navigate to HKEY_LOCAL_MACHINE\Software\Atria\ClearCase\CurrentVersion.
4) Click CurrentVersion, click Edit > New > Key, and type ClearCase Squid. You should now have the new registry key, HKEY_LOCAL_MACHINE\Software\Atria\ClearCase\CurrentVersion\ClearCase Squid.
5) Click ClearCase Squid, click Edit > New > String Value, and type DBSet in the Value Name field of the Edit String properties sheet.
6) Type the name of the master database in the Value Data field on the Edit String properties sheet. You should now have a single string value named DBSet with the database name as its data.
7) Restart the clients.

Table of Contents





Set the schema repository data code page value.
Updated: 05/12/11
The following discussion is fairly involved. Do not take the changing of the schema repository data code page value lightly. Refer to the Administrator's Guide for an entire chapter on the subject.
CQ has a setting called the "ClearQuest data code page" that is specified for each schema repository. All user databases associated with a schema repository use the same CQ data code page value. This value enforces a single code page for the database set and prevents characters not in the selected code page from entering the databases. In other words, if a user database is set to accept English characters, it won't allow you to enter data that contains, for example, Simplified Chinese. If you don't set the data code page value, the default value of ASCII is used and your user databases are limited to ASCII (printable English characters) data entry only. See the section called "Guidelines for Selecting a ClearQuest Data Code Page Value" in the Administrator's Guide.
NOTE: On Unix, the CQ client only supports ASCII. Customers with Unix clients must opt to either set the CQ data code page to ASCII (recommended) or require users on Unix systems to enter data only with the Web client. However, if you use non-ASCII characters for the names or properties when you create a new schema repository and sample user database, CQ automatically sets the CQ data code page value to the operating system code page of the system running the Maintenance Tool. The automatic setting of the CQ data code page occurs only when you create a schema repository. Each database (schema repo and user) vendor must support the data code page chosen. See the Admin manual about "Setting the Vendor Database Character Set".

WARNING: If you have CQ databases that were created with versions of CQ prior to 2003.06.00, they may contain data from a variety of code pages. When you set the CQ data code page, the data in your databases are not converted to characters in the selected code page. If your database contains characters that do not map to the new code page characters, data corruption (characters will be set to "?") will occur. Before you set the CQ data code page of an existing schema repository, you must convert the data in your vendor databases to the correct code page characters.
To find where any non-conforming characters are, see the codepageutil analyze_tables command.
To see if the database can (already) handle characters for a codepage set, see the codepageutil test_codepage command.

If you do not want to enforce a single code page for a schema repository, you can set the ClearQuest data code page to NOCHECKING. When you use this option, CQ does not verify that the data you enter can be stored in the database or displayed by CQ clients without being corrupted. Rational does not recommend that you set the CQ data code page to this value. CQMS does not work if the NOCHECKING value is specified.
If you set the CQ data code page to a non-ASCII value, users can only modify data in that database from a Windows client running the same operating system code page. If the code pages do not match, the database is opened in read-only mode. However, Unix Client users cannot open a database that has a non-ASCII value. Unix users can access a non-ASCII database via the Web interface, but the code page of the operating system they are on must match the data code page value. If the OS code page doesn't match the CQ db data code page value, the users will not be able to access the db via the web, even read-only.
As of CQ 2003.06.00 all entered data is validated, with the exception of data inside attachments.
You can always change the value away from the default 20127 (ASCII) to a non-ASCII value, but if you make any other combination of change, data corruption may result. If you change from or between non-ASCII values and you are using CQMS, incorrect data may also be in your oplogs. Therefore, Rational recommends that you remove all replicas, clean the database at the mastering site, scrub the oplogs, and then re-create your replicas. See the Admin manual for details.
For integrations with CQ, the integrated tool must be running on a system whose code page value is the same as that in the schema repo; otherwise the schema repo value must be set to ASCII. Note that Rational integrated products do not do a character validation check when the user is entering data. The check is only performed when the data is "pushed" to the CQ db.
For CQMS, all schema repos at all sites must be at the same value or synchronization imports will fail.

Set the data code page value on a schema repo.
Before doing this procedure, ensure ALL users are logged out of the schema repo and ALL user databases associated with that schema repo. The code page value can be changed within the CQ Maintenance Tool (as of 2003.06.13) or via the CLI. Choose an appropriate data code page value. Ensure it's supported by your db vendor. See the Admin manual.
1) In the CLI, cd to the CQ install directory, most likely "C:\Program Files\Rational\ClearQuest".
2) Run the following command to determine the current code page value of your schema repo and operating system. Note that it's normal for a standard English install of an operating system to return an ANSI Latin-1 code page. Yes, you want the CQ db to use an ANSI character set instead of the simple default of ASCII.
  # installutil lscodepage -dbset dbset admin-user admin_password
Ex:
  # installutil lscodepage -dbset 2003.06.00 cqadmin rational
3) Run ONE of the following. To force an unsupported code page value, or to set NONCHECKING, see the Administrator's Guide.
  # installutil setdbcodepagetoplatformcodepage -dbset dbset admin_user admin_password
-or-
  # installutil setdbcodepagetoascii -dbset dbset admin_user admin_password

Table of Contents





Interpret the values of ratl_priv_mask in a backend db table.
In the backend user database users table there is a column called ratl_priv_mask. Each value is a combination of the different privileges that each user has. Here is a list of the most commonly used privilege combinations.
ratl_priv_mask
0 SU
1 AU+DLA
2 AU+DLA+PFA
11 AU+DLA+PFA+SE
27 AU+DLA+PFA+SE+UA
31 AU+DLA+PFA+SE+UA+SA
16 AU+UA
28 AU+SA
8 AU+SE
0 AU+SD
17 AU+DLA
19 AU+DLA+PFA
25 AU+DLA+SE
Where:
AU  : Active User (all users/groups)
DLA : Dynamic List Administrator
PFA : Public Folder Administrator
SE  : SQL Editor
UA  : User Administrator
SD  : Schema Designer
DA  : Security Administrator
SU  : Super User
ALL : All Users/Groups Visible

Table of Contents





Rename a database.
No, it isn't possible to rename a user database. Even if you exported all the existing tickets and created a new database, the imported tickets would get a new set of IDs. However, you could keep the old ticket IDs in a new field for cross-reference, as sketched below.
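A minimal cqperl sketch of that idea, assuming the records are re-submitted through the API and the new schema has a field named legacy_id to hold the old ID (the record type, field names, login, db, and dbset here are all placeholders):

use CQPerlExt;

# Minimal sketch: submit a ticket into the new database and stash the old ID
# in a hypothetical "legacy_id" field for cross-reference.
my $session = CQSession::Build();
$session->UserLogon("admin", "password", "NEWDB", "MYDBSET");

my $old_id = "OLDDB00001234";                    # ID the record had in the old database
my $entity = $session->BuildEntity("defect");
$entity->SetFieldValue("Headline",  "Copied from the old database");
$entity->SetFieldValue("legacy_id", $old_id);    # placeholder field name
my $status = $entity->Validate();
if ($status eq "") {
	$entity->Commit();
} else {
	print "Validation failed: $status\n";
	$entity->Revert();
}
CQSession::Unbuild($session);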

Table of Contents





Enable multiline text searches in Oracle: CLOB.
Updated: 02/23/06
If you attempt to run a query that looks for text inside a multiline text field, by default it won't work. If Oracle is the user database vendor, the following link has the procedure to set it up.

http://www-1.ibm.com/support/docview.wss?uid=swg21124348

Table of Contents





Unlock a user database or schema repository.
Updated: 04/03/06
A user database is locked by CQ at various times, for example during schema upgrades. If a database needs to be unlocked, perhaps because it is still locked after a restore, access the db as an admin and perform the following. You will need the instance admin account name and password.
See: https://www-304.ibm.com/support/docview.wss?uid=swg21133810

From the CLI:
> installutil unlockschemarepo 
-or-
> installutil unlockuserdb dbvendor server db dbologin dbopassword connectoptions

Ex:
> installutil unlockuserdb MS_ACCESS mycomputer \\mycomputer\share\userdb.mdb admin "" ""
Directly in SQL:
sql> select db_locked from dbglobal;

sql> update dbglobal set db_locked = 0;

Table of Contents





Upgrade a user database programmatically.
Updated: 06/08/06
There may be a more straightforward way to accomplish this, but the following works.

#################################
# Push the latest changes to the destination db.
use CQPerlExt;	# typically needed when run as a standalone cqperl script

print "\nLogging into the destination dbset ($dest_dbset) ...\n";
$adminSession = CQAdminSession::Build;
$adminSession->Logon($dest_login,$dest_passwd,$dest_dbset);

# Push the changes.
$databaseObj	= $adminSession->GetDatabase($dest_db);
$schemaRevObj	= $databaseObj->GetSchemaRev;
if ( int($schemaRevObj->GetRevID) < $end_ver ) {
	print "Upgrading ($dest_db) to version ($end_ver) ...\n";

	$schemasObj	= $adminSession->GetSchemas;
	$found_schema	= 0;
	for ( $x = 0; $x < $schemasObj->Count; $x++ ) {
		$schemaObj	= $schemasObj->Item($x);
		$schemaName	= $schemaObj->GetName;
		if ( $schemaName eq $schema_name ) {
			$schemaRevsObj	= $schemaObj->GetSchemaRevs;
			$nrevs		= $schemaRevsObj->Count;
			if ( $end_ver > $nrevs ) {
				print "ERROR: The schema repository ($dest_dbset) only has ($nrevs) revisions for the ($schema_name) schema.  Cannot upgrade ($dest_db) database to version ($end_ver).\n";				goto FINISH;
			}
			$schemaRevObj	= $schemaRevsObj->Item($end_ver - 1);
			$found_schema	= 1;
			last;
		}
	}

	if ( $found_schema ) {
		if ( ! $preview ) {
			$databaseObj->Upgrade($schemaRevObj);
		} else {
			print "Database ($dest_db) not upgraded in preview mode.\n";
		}
	} else {
		print "ERROR: Unable to find a schema named ($schema_name) in the destination schema repository ($dest_dbset).\n";
		goto FINISH;
	}
} else {
	print "Destination db ($dest_db) version is up to date.\n";
}

CQAdminSession::Unbuild($adminSession);

Table of Contents





Configure (or determine) a schema repository LDAP authentication.
Updated: 10/24/18
The ability to do LDAP authentication was provided in 2003.06.15. There is a manual associated with that release that has a whole chapter on LDAP. It should be consulted before attempting these commands. It is highly recommended that you set up and test LDAP authentication on a test schema repo first. In fact, there are so many options to these commands that you must read that manual. That manual also contains a useful survey that can be handed to the LDAP administrator ahead of time. Once the schema repository is enabled for LDAP, users can be enabled on an individual basis. Even after enabling LDAP authentication, users still need to have a "users" record defined.
If CQMS is involved, note that the LDAP configuration is replicated, but can be modified locally. Note also that all the following "set" commands have a corresponding "get" command as well.

# This command ensures users don't attempt LDAP authentication during the upgrade.
  installutil setauthenticationalgorithm schema-repo admin passwd CQ_ONLY

# This command sets the string that connects the CQ schema repo to the LDAP server.  The -h option sets the
# primary and secondary LDAP servers.  The -p option sets the port number.  If the LDAP servers don't allow
# anonymous access, the -D and -w options are required.  See the manual for those and other options.
# The -w option is a password known to the LDAP administrators.
# You'll have to work with the LDAP team to know/understand what "cn" and "dc" parameters to use.

  # Anonymous LDAP login
  installutil setldapinit schema-repo admin passwd "-h 'primary-server secondary-server' -p port"

  # Log into LDAP using a designated service account
  installutil setldapinit schema-repo admin passwd "-h 'primary-server secondary-server' -p port -D cn=search_user,cn=Users,dc=svc-account,dc=com -w svc-account-password"

# This command sets up the LDAP search criteria.  The -s option tells it search the subtree.  The domain name is
# broken up in the -b option's "dc" parts.  For example, ent.wfb.bank.corp would be "dc=ent,dc=wfb,dc=bank,dc=corp".
# Note that "%login%" is literal.  That will be filled in by CQ.
  installutil setldapsearch schema-repo admin passwd "-s sub -b ou=dept,domain-components (&(objectCategory=person)(samAccountName=%login%))"

# The following command sets the LDAP mapping criteria.  In this example, LDAP will authenticate against the user's login.
# Other choices are specified in the manual.  Note that whatever field is chosen for the map must have unique values
# across all LDAP-authenticated users in CQ.  If CQMS is employed, this field must be the same across all sites.
  installutil setcqldapmap schema-repo admin passwd CQ_LOGIN_NAME %login%

# The following validates that the above commands set the schema up correctly.
# This will return the current set of parameters and should not return any error messages.
  installutil validateldap schema-repo admin passwd test-user test-user-passwd

# The following sets CQ to look for the user in CQ.  If the user is found and set to authenticate against LDAP,
# or if the user isn't found, it will authenticate against LDAP.  Otherwise, the system will authenticate against CQ.
  installutil setauthenticationalgorithm schema-repo admin passwd CQ_FIRST

You can turn LDAP on and off without undoing and redoing everything. See getauthenticationalgorithm and setauthenticationalgorithm
See also Set users up for LDAP authentication.

WARNING: Multisite considerations.
1) The connection parameters are propagated to the remote sites.
2) The parameters can only be set at the working master site.
3) A user's authentication mode is the same at all sites.

Debugging
If there are LDAP issues afterward, set up debugging. It's also possible to get CQ core tracing set up. Contact IBM Support for help with that.
1) Create the following system environment variables:
LDAP_DEBUG_FILE=C:\path\to\file.txt
LDAP_Debug=65535
2) Run the client as a user that doesn't have trouble authenticating, then as a user who is having trouble.

Table of Contents





Add/remove users to/from LDAP authentication.
Updated: 09/13/06
Before a user can be enabled for LDAP authentication, the schema repository must be enabled first. See Configure a schema repository for LDAP authentication. Once the schema repository is enabled, users can be connected to LDAP on a case-by-case basis.
As part of the 2003.06.15 release, in the User Administration Tool, each user record now has a checkbox at the bottom to enable LDAP. Checking that box will have CQ authenticate against LDAP. The other box, "LDAP Login", is there if the user has a different LDAP login than their CQ login. Leave it blank if you want the authentication to use the CQ login.
Users can be programmatically updated as well.
  $adminSessionObj = CQAdminSession::Build;
  $adminSessionObj->Logon($admin_login, $admin_passwd, $dbset);
  $userObj = $adminSessionObj->GetUser($login);
  $userObj->SetLDAPAuthentication($login);
  CQAdminSession::Unbuild($adminSessionObj);
The user record in the User Administration tool also has a field called "LDAP Login" (that field is not part of the "users" record type). I didn't verify this functionality, but apparently when you enable a user for LDAP authentication and type the user's LDAP login name there, it will immediately validate that user. If the validation succeeds, it places that value in the CQ field that was designated as the setcqldapmap field, such as CQ_LOGIN_NAME.

See also Configure a schema repository for LDAP authentication.

To remove a user from LDAP authentication, in the User Administration tool, simply uncheck the LDAP box. Note that the password that existed before the user was switched to LDAP is no longer there; the password must be reset. However, you cannot simultaneously turn off LDAP and set the new password. For an unknown reason, you must turn off LDAP, click OK for that user, then access that user again and set a new password.

Table of Contents





Basic SQL.
Updated: 06/20/18
WARNING: Because there are several intermediate tables that hold entity and field definition IDs and such, you should NEVER insert or delete fields, records, or tables outside of a CQ tool. CQ will break!!

Note that pdsql will accept batched input from a file or send output to a file, as in:
pdsql -v db2 -db cqud01 -u cquadmd1 -p 123456 -s rsdrtl01 -co PORT=64011 < C:\temp\SQL_commands.txt > C:\temp\SQL_command_results.txt

The following are some basic SQL commands. If you want to read data through a REFERENCE (parent/child) field, read up on LEFT OUTER JOIN. CQ uses multiple intermediary tables to link records, so that is beyond the scope of these examples. Note that while the SQL commands below are in all caps, they can be written in all lower-case as well.
-- Display field values:
	SELECT field1, field2, field3
	FROM recordtype;

-- Filter on field values:
	SELECT field1, field2
	FROM recordtype
	WHERE field1 = 'value'
If the value contains a single quote, enter it as a double-single quote.
For example, "Eric's" would get entered as 'Eric''s'.
Numerical comparisons don't need quotes, as in "WHERE field1 > 100".

-- Filter on multiple fields:
	WHERE field1 = 'value' AND field2 < 1000

-- Filter on multiple values (or) within a field:
	WHERE field1 IN ('value1','value2','value3','value4')

-- Field starts with "valu":
	WHERE field1 LIKE 'valu%'

-- Field contains "valu":
	WHERE field1 LIKE '%valu%'

-- Escape single quotes (if string = "Bob's", double up the internal quote):
	WHERE field1 = 'Bob''s'

-- Filter on a range of values (field1 is an INTEGER):
	WHERE field1 BETWEEN 1000 AND 2000

-- Group nested filters (field2 is a SHORT_STRING):
	WHERE field1 BETWEEN 1000 AND 2000
	AND
		(field2 BETWEEN '00004040' AND '00005050'
		OR
		field2 BETWEEN '00006000' AND '00006050')

-- Field does not equal a value:
	WHERE field1 <> 'value'

-- Find out if a reference field is empty or not:
	WHERE field1 is NULL
	WHERE field1 is not NULL

-- Field does not contain a value:
	WHERE field1 NOT LIKE '%valu%'

-- Sort in ascending (ASC) or descending (DESC) order:
	WHERE field1 <> 'value'
	ORDER BY field2 ASC

-- Sort multiple fields in the order in which they are listed:
	ORDER BY field1 ASC, field2 DESC

-- Query on dynamically determined values (select in select):
	SELECT field1
	FROM recordtype
	WHERE field1 >
		(SELECT field2
		FROM recordtype
		WHERE field3 = 'value')

-- Mathematically analyze a set of returned values with AVG, COUNT, MAX, and SUM.
These four will return a single value unless GROUP BY is used.
	SELECT SUM(field1)
	FROM recordtype

-- Group a set of returned values.  This will return the sum of "field2"s for each
different value of field1.
	SELECT field1, SUM(field2)
	FROM recordtype
	GROUP BY field1

-- If a returned row is identical to another, only return one of them:
	SELECT DISTINCT field1
	FROM recordtype

-- Assign each table (recordtype) a variable when more than one table is involved:
	SELECT T1.field1, T2.field2
	FROM recordtype1 T1, recordtype2 T2
	WHERE T1.dbid = T2.parent_dbid

-- Change a field value.  Note that "UPDATE"s are not allowed in CQ pdsql.
	UPDATE recordtype
	SET field = 'newvalue'
	WHERE field = 'oldvalue'
   If the new value is '', use "SET field = NULL".

-- Change a field substring.  Note that UPDATEs are not allowed in CQ pdsql.
	UPDATE recordtype SET field = REPLACE(field, '/old/', '/new/')

-- Get a listing of all tables:
	SELECT * from cat;
In pdsql:
	tables;

-- Get a listing of all columns in a table.
	DESC tablename;
In pdsql:
	columns tablename;

-- Change the state of a record.  No hooks are run when doing this.
	SELECT id,name FROM entitydef WHERE name = 'recordtype';       -- this gets the entitydef id
	SELECT id,name FROM statedef WHERE entitydef_id = entitydefid; -- this gets the state id
	UPDATE tablename SET state = newstateid WHERE id = 'recordid';

-- Delete a row.
	DELETE FROM tableName WHERE columnName=value
WARNING: If the "where" clause is not correct (does not uniquely select a row), it may delete ALL rows.

-- Determine the current DB2 database name
	SELECT CURRENT_SERVER from sysibm.sysdummy1;

-- Get a db table name for a CQ record type.
	SELECT db_name FROM entitydef WHERE name = 'record-type-name';

-- Get a db column name for a CQ field.
	SELECT id FROM entitydef WHERE name = 'CQ-record-type';
	SELECT db_name FROM fielddef WHERE entitydef_id = 'entitydef-id' AND name = 'CQ-fieldname';

-- Just get a count of rows returned.
	SELECT count(id) FROM tablename;
Table of Contents



Determine the next dbid that will get used.
Updated: 07/27/11
New in CQ 7.0, the installutil command can return information about dbids. The output will need to be parsed if performing the "get" programmatically.
  installutil getrecordlimits -dbset dbset  username  password  { user_db | -all }
Note that a dbid gets consumed for stateful and stateless records even if the Submit action is not committed.
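If you need the limits programmatically, one approach is to run the command from cqperl and scan its output. This is only a sketch: the dbset and login values below are placeholders, and the exact output format varies by CQ version, so inspect the real output first and adjust the match accordingly.
	# Capture the installutil output and scan it (placeholder credentials/dbset).
	my @output = `installutil getrecordlimits -dbset CQ_DBSET admin cqpassword -all`;
	foreach my $line (@output) {
		# Placeholder filter; adjust once you've seen the real output format.
		print $line if $line =~ /limit|dbid/i;
	}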

Table of Contents





Increase the record ID (dbid) limit.
Updated: 09/08/06
New in CQ 7.0, the installutil command has the subcommand "sethighrecordlimit" for Windows only. The command has no effect if the db is already at the higher limit. Use the getrecordlimits subcommand (above) to check the current limit first:
  installutil getrecordlimits -dbset dbset  username  password  { user_db | -all }
Table of Contents



Rational ClearQuest Diagnostics utility.
Updated: 05/12/11
The Rational ClearQuest Diagnostics utility was introduced in CQ 7.0. The utility examines your schema repository and user databases to identify conditions that might cause integrity or performance problems. On Windows, it's a Perl script: CQ-home\diagnostic\cqdiagnostics.pl.
Before editing the diagnostics PERL script, make a copy of the original. Give the copy a name that describes the specific configuration. By doing this you can have multiple diagnostic PERL scripts that don't have to be reconfigured. Also, because these files are in the CQ directory tree, you should keep your configured files under source control, just in case CQ ever gets uninstalled and reinstalled.
1) Edit "cqdiagnostics.pl". Set all the variables in sections 1 thru 4. Note that the paths specified in section 4 for must already exist. Also, those files are not written for the "describe" subcommand; that information is sent to STDOUT.
2) On the CLI, change to the ClearQuest\diagnostic subdirectory. Execute the PERL script using cqperl.

NOTE: I ran the validatedb against a large userdb on a decent server and it took almost two hours. Moreover, the resulting text file was ~1.5GB; too large to be opened in Notepad.

NOTE: The describe subcommand simply lists the rules the engine will evaluate; it doesn't describe your database. Those rules are defined in CQ-home\diagnostic\configuration\rules.xml, but that's pretty advanced CQ.

Table of Contents





CQ feature levels
Updated: 04/27/11
A schema repository has a feature level and metaschema version. A feature level installs functionality at the database level, as opposed to the schema level as a package.
Determine the current metaschema version and feature level using pdsql:
	> select name, feature_level, metaschema_version from master_dbs;
Starting in CQ 7.1 you can get a listing of feature levels supported by the current CQ version:
	installutil showfeaturelevels 
To upgrade a feature level, see the full instructions in the install manual / release notes.

Table of Contents





Use pdsql.
The pdsql command lives in CQ-home and is used to modify CQ databases. There are many SQL operations that can be run to alter a database, but only the ones available in pdsql are supported. Always end pdsql commands with a semicolon.
  pdsql  -u db-owner  -p password  -v ss  -db userdb-name  -s server

MS Access:
  pdsql -u admin -p password -v access -db full-unc-path-to-schemarepo.mdb
The -u option is the owner of the SQL database, independent of any instance. The -p option is the owner's password. The -v option is the type of database, such as ss = SQL Server. The -db option is the name of the user database to be accessed. The -s option specifies the hostname of the server on which the SQL database resides.

Table of Contents





Update schema repository after copying new data.
Updated: 07/12/07
If you make a copy of a database, perhaps to set up a testing area, be sure to update the schema repository properties.
Even if you create a new connection for it in the Maintenance Tool on a brand-new machine, you still need to run Schema Repository -> Update -> Selected Connection. This is necessary because while the external connection properties (stored in the registry) may get changed, the internal row data will not.

WARNING: Updating the selected connection is especially important if you copy tables into an existing CQ database. The maintenance tool will show that it is connected to the existing CQ database (the one you copied the data into). But if you then log into that schema repo, perhaps to update the user database connection, you are actually (and unwittingly) logging into the database from which the CQ tables were taken, because the internal row data has not been updated. Be safe and update the schema repo connection even if you think it is already ok.

Table of Contents





Register a schema repository.
Updated: 05/11/11
After installing CQ, open up the ClearQuest Maintenance Tool. To manually enter connection data, select Connection -> New. To import saved connection information, select File -> Import Profile. If you manually enter the connection information, be sure to save it from File -> Export Profile for later use. It creates a .ini file that can be easily imported for other installs or used to automate the connections during install (see siteprep in the release area).
The connection can also be made from CLI:
	installutil clientregisterschemarepo parameters
The "installutil registerschemarepofromfile" seems like it should work from an exported profile, but I can't get it to work.

Table of Contents





Make a copy of a database.
Updated: 05/12/11
If you want to make a copy of a schema repo and user db, perhaps for the purpose of testing, you can have the DBA make the copies for you.
CQ also provides CLI tools to do it, but I have no personal experience with these: installutil convertschemarepo
installutil convertuserdb

Table of Contents





Generate a summary of database details.
Updated: 05/26/11
The following cqperl script outputs the details of the specified schema repo and its associated user databases. You must be logged in with Administrator rights.
use CQPerlExt;
$repo			= "Development";
$adminSession = CQAdminSession::Build();
$adminSession->Logon($login,$passwd,$repo);
@DB_VENDOR		= ('Unknown', 'SQL Server', 'MS Access', 'SQL Anywhere','Oracle', 'DB2');
$databases_entity	= $adminSession->GetDatabases();
for ( $i = 0; $i < $databases_entity->Count(); $i++ ) {
	$database		= $databases_entity->Item($i);
	$db_name		= $database->GetName();
	$db_vendor		= $DB_VENDOR[$database->GetVendor()];
	$db_host		= $database->GetServer();
	$db_sid			= $database->GetDatabaseName();
	$db_login		= $database->GetDBOLogin();
	$db_password		= $database->GetDBOPassword();
	$connect_options	= $database->GetConnectOptions();
	$schema_rev_entity	= $database->GetSchemaRev();

	if ( "$db_name" eq "MASTR") {
		$schema_name	= 'N/A';
		$schema_version = 'N/A';
	} else {
		$schema_name	= $schema_rev_entity->GetSchema()->GetName();
		$schema_version = $schema_rev_entity->GetRevID();
	}

	print "
Schema repo:\t$repo
Database:\t$db_name
Schema:\t$schema_name
Schema ver:\t$schema_version
Vendor:\t$db_vendor
Hostname:\t$db_host
SID:\t$db_sid
DB login:\t$db_login
DB password:\t$db_password
Connect options:\t$connect_options\n";

}
CQAdminSession::Unbuild($adminSession);
Table of Contents



Get a programmatic list of user databases.
Updated: 05/26/11
Version: 7.0.1.8

Once logged into a schema repository via an admin session, you can get a listing of user databases, perhaps to push changes out to each.
	$databases_co	= $adminSession_o->GetDatabases;
	$n_databases	= $databases_co->Count;
	for ( $x = 0; $x < $n_databases; $x++ ) {

		$db_o	= $databases_co->Item($x);
		$db	= $db_o->GetName;

		...
Table of Contents



Determine the database set if you know the user database.
Updated: 03/15/13
Version: 7.1.2

The user database is often known because of a ticket id or from a UCM project's properties. However, logging into the database can be tricky, because while the user database name is hard-coded into the system at creation, the database set name can be anything on a user's workstation.
The database set can be determined by inspecting the Windows registry, either manually or programmatically. The information is stored in "HKEY_LOCAL_MACHINE\Software\Rational Software\ClearQuest\cq_version\Core\Databases\database_set".

Table of Contents





List the database set(s) known to a Unix server.
Updated: 12/14/15
Version: 7.1.2.14

	cqreg show
Table of Contents



Add a schema repository (dbset) connection to a Unix server.
Updated: 03/09/16
Version: 7.0.1

On Unix, dbsets are usually listed in "/opt/rational/clearquest/CQDB_rgys/cqdb_registry".

	cqreg add_dbset -vendor vendor -server server-name -database dbset -u db-user -p db-password "connect-options"

Example:
Database = schema repository (dbset) name

	cqreg add_dbset -vendor ORACLE -server oracle01 -database MUOS -u Oracleuser -p Oraclepass "SERVER_VER=9.2;SID=MUOS;HOST=oracle01;LOB_TYPE=CLOB"
Table of Contents



Remove a schema repository (dbset) connection from a Unix server.
Updated: 05/18/16
Version: 2003.06.00

On Unix, dbsets are usually listed in "/opt/rational/clearquest/CQDB_rgys/cqdb_registry".

	cqreg drop_dbset -dbset dbset-name -force
Table of Contents



Programmatically determine a local dbset name if you know the underlying SID.
Updated: 06/24/16
Version: 7.1.2.14

On Windows, a user can rename a schema repository to whatever they want in the Maintenance Tool and it will still work.
Unfortunately, if writing a robust script that needs to log into CQ, you need to know the local name for the dbset.

This can be done programmatically.
1) Determine the SID of the dbset. Look in the Maintenance Tool properties for the dbset and note the SID.
2) Using Perl Win32::Registry, look in "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Rational Software\ClearQuest\current-version\Core\Databases\dbset\MASTR" and you'll see the key called "Database". That name is set at the database level, so cannot be changed on a workstation. Note that "Wow6432Node" may or may not be in the key path depending on your operating system version.
3) Search each dbset defined in the registry and return the one that matches the correct SID. It's OK if the user has the same schema repo defined more than once in the Maintenance Tool; both will have the same SID, so either will work. A short sketch follows.
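The following is a minimal sketch of that search using Win32::Registry. The SID, the CQ version in the key path, and the presence of Wow6432Node are assumptions to adjust for your site; the "Database" entry under MASTR is read as described above.
	use Win32::Registry;

	my $target_sid = "MUOS";   # the SID noted in the Maintenance Tool (assumption)
	my $base       = "SOFTWARE\\Wow6432Node\\Rational Software\\ClearQuest\\7.1.2\\Core\\Databases";

	my ($dbs_key, @dbsets);
	$HKEY_LOCAL_MACHINE->Open($base, $dbs_key) or die "Cannot open $base\n";
	$dbs_key->GetKeys(\@dbsets);
	$dbs_key->Close();

	foreach my $dbset (@dbsets) {
		my ($mastr_key, %values);
		next unless $HKEY_LOCAL_MACHINE->Open("$base\\$dbset\\MASTR", $mastr_key);
		$mastr_key->GetValues(\%values);
		$mastr_key->Close();

		# GetValues fills the hash as name => [ name, type, data ].
		my $sid = $values{"Database"} ? $values{"Database"}[2] : "";
		print "Local dbset name: $dbset\n" if $sid eq $target_sid;
	}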

Table of Contents





View which server a database is associated with.
Updated: 08/23/17
Version: 8.0.1.14

The name of the server where the schema repository or user database is located is embedded (written) inside the schema repository. If a database ever gets moved, that needs to be updated.
In pdsql:
	select * from master_dbs where database_name='db-name';
Table of Contents



Add free-form text to an email rule.
Updated: 05/23/06
You can add any existing field to the email that an Email Rule sends out. But there is no out-of-the-box way to add free-form text, perhaps to tell the user why they are receiving the email. The following is a work-around.
For each string of text that you'd like to send to the user, create a SHORT_STRING or MULTILINE_STRING field to hold it, whose Default Value is a CONSTANT. However, you can also have the text contain dynamic information, as in the Code_Reuse_Message example below. Don't place the field on any form. Then, simply reference that field as a Display Field in the email rule. Note that a SHORT_STRING defaults to 50 chars long, but can be lengthened when the field is initially defined.

Here are some fields and associated value examples:
Notification_Review
You've been assigned or removed as a Reviewer.
Code_Reuse_Message
The $this_project project has submitted a Problem Report against reused $other_project code.
Notification_Need_More_Info
This is being sent back to you for more info.
Web_Login_Link
http://webserver/cqweb etc...

To add a URL into another field, see Embed a clickable URL in a field.

Table of Contents





E-mail notification.
Updated: 05/23/06
ALL users must enable their own email notification in the Client; go to View->Email Options. In the web interface, a CQ admin needs to go to Operations, Edit Web Settings.
To get email sent automatically for certain state or field changes, create Email_Rule records. However, even if an automatic email rule is in effect, the end-user still must enable email notification to receive emails. To export/import email rules, see "import data into CQ".

Generate emails from within the schema instead of with Email Rules.

There are several reasons why you should place email notifications within the schema code. The one downside to having email generated from the schema instead of Email Rules is that you can only make email notification changes when the schema is next released, which may be a couple of months away at some institutions.
1) Email Rules are inefficient. When you commit a record, all Email Rules fire to determine if they are to send an email. In order to make that determination, an Email Rule may need to run a query. If you have several Email Rules, there is a marked performance hit when committing a record.
2) You have much greater flexibility in deciding whether or not an email should be sent. The Email Rule record type is fine for simple rules, but some may take extended logic, which simply cannot be done in an Email Rule.
3) You have complete control over the content of the Subject and Body text, which means that you can place free-form text in the body of the email.
4) No email configuration necessary. You don't have to worry about users of the Client interface making the personal decision to turn off email notifications.
Use the following VBScript snippet as an example of how to send email from within the schema. As with Email Rules, the person logged in when this fires will automatically be set as the From address. Most often this code would be placed in an action Notification hook. However, at that stage you cannot modify any fields. For example, in the code below we are adding a note entry, which cannot be done that late in the commit. For that reason, the code should be placed in a Validation hook. But, because there may be errors, you don't want the email sent out until the record validates properly. So place this code at the very end of any other code, with an if statement around it that checks the status of, say, the Defect_Validation variable.
	' ''''''''''''''''''''''''''''' 
	' Send email to the new Tester if that field changed, but only in the Testing state. 
	if GetFieldValue("State").GetValue = "Testing" and GetFieldValue("Tester").GetValue <> GetFieldOriginalValue("Tester").GetValue then 

		set mailObj = CreateObject("PAINET.MAILMSG") 

		address = GetFieldValue("Tester_Email").GetValue 
		mailObj.AddTo(address) 

		subject = "You have been assigned as the Tester for Defect " & GetFieldValue("id").GetValue
		mailObj.SetSubject(subject) 

		body = "Headline: " & GetFieldValue("Headline").GetValue & vbCrLf
		mailObj.SetBody(body) 

		status = mailObj.Deliver

		' Retry if this first deliver failed for some reason.
		if status = 0 then 
			status = mailObj.Deliver 
		end if
	end if

Ensure SMTP server is responding.

Once the system administrator has given you the hostname of the SMTP server, you can verify connectivity via the following. The ping will verify simple name resolution. The telnet will open a window and report something like "220 hostname ESMTP Server...ready". If you get anything resembling an error, then hostname is not "talking" SMTP.
  # ping hostname
  # telnet hostname 25
Debug email notification issues.
New in CQ 2001A, email debug information can be viewed using dbwin32 on Windows. Create registry files for the client and/or webserver machines. You must have local Admin rights to view debug output.
Native Client

REGEDIT4
[HKEY_CURRENT_USER\Software\Rational Software\ClearQuest\Diagnostic]
"Trace"="Email"
"Output"="ODS"
"EMailSendVB"="ODS"

Webserver

REGEDIT4
[HKEY_USERS\.default\Software\Rational Software\ClearQuest\Diagnostic]
"Trace"="Email"
"Output"="ODS"
"EMailSendVB"="ODS"
Stop the ClearQuest service, import the registry files, and restart ClearQuest.

Table of Contents





Lock down Email Rules.
Updated: 05/23/06
If you want to lock down all Email Rules so that only a certain group can create/modify them, use the built-in USER GROUPS option in the Access Control column of the Email Rule Submit and Modify actions.
If you want to set it up so that a certain set of Email Rules is only modifiable by users in a given group (perhaps to lock down administrative-type rules), perform the following steps. In the Name of the Email Rule, place in parentheses a comma-separated list of groups that are allowed access. In the Modify action Access Control hook, place code similar to the following. For this and other uses, it's a good idea to have a CQ group of "admins" set up that is independent of the "superuser" access. That group should always have access to do administrative stuff where others cannot.
	' '''''''''''''''''''''''''''''''''
	' This function will check to see if the current logged in user is a part of a group
	' allowed to Modify the Email Rule.  Ensure users in the "Admin" group can get in.
	usergroups	= GetSession.GetUserGroups
	name_text	= GetFieldValue("Name").GetValue
	email_rule_AccessControl = FALSE
	if InStr(name_text,"(") and IsArray(usergroups) then

		string1 = split(name_text,"(")
		string2 = split(string1(1),")")
		allowed = split(string2(0),",")

		for each group in userGroups

			if group = "Admin" then
				email_rule_AccessControl = TRUE
				exit for
			end if

			for each allowed_group in allowed
				if group = allowed_group then
					email_rule_AccessControl = TRUE
					exit for
				end if
			next
		next
	end if

Table of Contents





Create an Email_Rule record.
Updated: 05/23/06
Email rules are created from the client by users that have permission to modify the schema. Select Actions -> New... -> Email_Rule. If the schema with which you are working does not have the Email_Rule record type, it can be added as a package in the Designer. By filling out all the required information in the Submit Email_rule screen, you are establishing the criteria that triggers email notification.
In the Rule Controls tab, give it a name unique within the current schema, associate it with a record type (most often Defect) and list field(s) to monitor. If one of the monitored field's values changes, an email is sent. In addition, one can use a Public Query as a trigger to send email when the record meets multiple criteria. The check box at the bottom toggles whether this rule is active or not. That is, one can disable the rule without deleting it.
In the Action Controls tab, one can select an action (Actions) or an action type (Action) to trigger the email, or emails can be triggered by moving a record between states (Source and Destination).
The Display Fields tab, though not mandatory, is where the email's appearance is configured.
The To Addressing Info and CC Addressing Info tabs should be self-explanatory. If adding addresses external to CQ into the bottom-right pane, you are limited to 22 chars.
NOTE: To ensure users are able to send email, in their CQ client, select View -> E-mail Options... and configure it accordingly.

Table of Contents





Modify an Email_Rule record.
Updated: 05/23/06
In the CQ client, Edit -> Find Record -> Entity (Email_Rule) & ID (email-rule-name) -> Actions -> Modify. If you don't remember the name of a particular Email_Rule record, run Query -> New Query... -> Email_Rule and simply run a query with Name as the only field displayed.

WARNING: Do not modify the Email_Rule record type itself, as it will break the code that supports e-mail rules.

Table of Contents





Set up the Rational Email Reader.
Updated: 05/24/06
With the Rational Email Reader, you can send emails to CQ with new submissions or modifications to existing records. There are more details in the Admin Manual, but the following will give you an idea of what is needed.
1) Each user database that is to work with the Rational Email Reader needs to have a dedicated email address. This may be a sticking point at companies that don't like giving out email addresses that do not have a human attached to them.
2) If using SMTP, you're going to need a login and password to access a POP3 server.

Table of Contents





Include attachments with email.
Updated: 04/23/10
Custom emails can be sent from within the CQ schema. The following code will allow multiple attachments to be included. Note that this doesn't contain the required $smtp setup, nor the code that retrieves the CQ attachment object used by the $attachment->Load call below. This code requires "use MIME::Base64". You can send any type of attachment with this.

$boundary = "SOME_VERY_UNIQUE_STRING";
$smtp->datasend("MIME-Version: 1.0\n");
$smtp->datasend("Content-Type: multipart/mixed; boundary=\"$boundary\"\n");

$smtp->datasend("\n--$boundary\n");
$smtp->datasend("Content-Type: text/plain;\n\n");
$smtp->datasend("This is the text body of the message.\n");

$temp = $ENV{TMP} || $ENV{TEMP};
if ( "$temp" eq "" ) {
	die "Must have a temp directory defined";
}

foreach $filename ("file1.xls","file2.doc") {

	# Copy the file from the database to the local disk.
	$full_path = "$temp/$filename";
	$attachment->Load($full_path);

	# Read the file contents, then delete the temporary file.
	open(FILE,$full_path);
	binmode(FILE);
	@buffer = (<FILE>);
	close(FILE);
	$content = join("",@buffer);
	unlink($full_path);

	# MIME-encode the file and get its encoded length.
	$encode = encode_base64($content);
	$length = length($encode);

	# Write the email entry.
	$attachment  = "\n--$boundary\n";
	$attachment .= "Content-Type: application/octet-stream; name=\"$filename\"\n";
	$attachment .= "Content-Disposition: attachment; filename=\"$filename\"\n";
	$attachment .= "Content-Transfer-Encoding: base64\n";
	$attachment .= "Content-Length: $length\n";
	$attachment .= "\n$encode\n";

	$smtp->datasend($attachment);
}

$smtp->datasend("--$boundary--\n");

Table of Contents



Force a user to enable email notification.
Updated: 04/21/11
Version: CQ 7.0.1

In the web interface the email notification enable switch is controlled by an administrator. However, in the fat client the user can disable email notification at any time. But, if emails are key to the process and proper communication, you want to prevent that.
Place code similar to the following in a global script. Call the global script from a BASE action Validation hook for each record type where email communication is critical.
	if ( ! $session->IsEmailEnabled ) {
		$result = "\n\nERROR: You must have email notification enabled to utilize this record type.\n";
	}
Table of Contents



Programmatically send emails.
Updated: 04/21/11
Version: CQ 7.0.1
NOTE: If sending an email by clicking a button, the record must be in an editable condition for the button to work.

SMTP
Custom emails can be sent by contacting an SMTP server. This is very useful for sending CQ administrators an email if something goes wrong in the schema. Note that this example allows embedded HTML as well.

send_mail("emailaddress\@company.com;address2\@company.com","Subject","The body text allows <u><b>HTML</b></u>.\n");

sub send_mail {

	use Net::SMTP;
	my $smtp_server	= "smtpserver.company.com";
	my $to		= $_[0];
	my $subject	= $_[1];
	my $body	= $_[2];

	my $smtp = Net::SMTP->new("$smtp_server");
	if ( "$smtp" eq "" ) {
		$session->OutputDebugString("send_mail: ERROR: Unable to reach the SMTP server: $smtp_server\n");
		return;
	}

	$smtp->mail("ClearQuest-NoReply\@company.com");
	foreach $address (split(/\;/,$to)) {
		$address =~ s/\s+//g;
		$smtp->to($address);
	}
	$smtp->data();
	$smtp->datasend("To: $to\n");
	$smtp->datasend("Subject: $subject\n");
	$smtp->datasend("Content-type: text/html\n\n");
	$smtp->datasend("\n");
	$smtp->datasend("$body\n");
	$smtp->dataend();
	$smtp->quit;

	return;
}

CQMailMsg
Custom emails can also be sent utilizing whatever email method (SMTP or MAPI) has been set locally in the Windows fat client. On Unix it defaults to sendmail.
	my $mailmsg = CQMailMsg::Build;
	$mailmsg->AddTo("emailaddress\@company.com");
	$mailmsg->SetSubject("This is the subject");
	$mailmsg->SetBody("This is the body text!\n");
	$mailmsg->Deliver;
	CQMailMsg::Unbuild($mailmsg);

Outlook
You can also send an email specifically to Outlook. In this example, the email address is stored in a field on a record and there is a button next to it that calls this global script. CreateItem requires an argument, but it can be any string. This only works in the fat Client interface.

Function Email_This_User (entity_type, entity_id, field_name)

	email_to	= GetFieldValue(field_name).GetValue
	email_subject	= entity_type & " " & entity_id
	email_body	= "This email was sent from ClearQuest."

	Set oOutlook	= CreateObject("Outlook.Application")
	Set oEmail	= oOutlook.CreateItem(dummy)

	oEmail.To	= email_to
	oEmail.Subject	= email_subject
	oEmail.Body	= email_body
	oEmail.Display

end function

mailto:
In a field on a form, you can preface an email address with "mailto:". This will create a clickable link on the record that a user can access even if the record is not in an editable state. If not in an editable state, the subject and body of the email cannot be dynamically set inside CQ, but can be set by the user before sending the email. Simply place a string similar to the following in a text field. Note that special characters, such as spaces, have to be encoded.
	mailto:user@company.com?subject=This%20is%20the%20subject&body=This%20is%20the%20body

Table of Contents





Install the EmailPlus package.
Updated: 01/31/12
Version: 7.0.1.8
The EmailPlus package does not come bundled with the installation, so you won't see it in the Package Wizard.
See http://www-01.ibm.com/support/docview.wss?rs=988&uid=swg24025441

Table of Contents





Basic EmailPlus rule.
Updated: 05/08/12
Version: 7.0.1.8
The following steps will set up a basic EmailPlus rule.
1) Create an EmailPlusConfig record. This tells whether or not EmailPlus is active (entirely), who the admin is, and what configuration is applicable for each CCMS site. While the record is open for edit, submit an EmailPlusSiteConfig by clicking the New button. You can only create EmailPlusSiteConfig records from this record. Create an EmailPlusSiteConfig record for each site in the CCMS family. Yes, you have to create one for the current site even if MultiSite isn't being used.
EmailPlus overview: http://publib.boulder.ibm.com/infocenter/ieduasst/rtnv1r0/topic/com.ibm.iea.rcq/rcq/7.1/Operations/EmailPlus.pdf
EmailPlus manuals: http://www-01.ibm.com/support/docview.wss?rs=988&uid=swg24025441
WARNING: For an unknown reason, if you set the Web Server value in the EmailPlusSiteConfig for the purpose of embedding a link URL in an email, you need to log out and back in before that value will get picked up by EmailPlus.
2) Create an EmailPlusTemplate record. Templates define the look and feel of an email.
3) Create an EmailPlusRule record. This is the actual rule that fires when the criteria is met.

WARNING: If an email rule is modified outside of the database session you are currently logged into, you must log out and back in to pick up the rule changes. The session caches rule information. This can occur if you have two windows open; one for editing the rule, and another for transitioning records for testing.

Table of Contents





EmailPlus tags.
Updated: 05/15/12
Version: 7.0.1.8
Current CQ database:
	#@expression::$session->GetSessionDatabase->GetDatabaseName;@#
Action:
	#@expression::$entity->GetActionName;@#
Even though the admin manual gives clear examples of embedding run-time values into an expression, the system tries to interpret the value when the EmailPlusTemplate is saved, which leads to syntax errors. This example, straight out of the manual, throws an "if" syntax error because the State substitutions don't return anything until run time. There's something clearly wrong between the admin manual and reality; I verified the schema has EmailPlus 2.0. In the expression examples above, the variables can be evaluated at Apply, even if they yield the wrong value there; the system then re-evaluates and picks up the correct value at run time.
	State: #?State?# #@EXPRESSION::if (#?State?# ne #%State%#) { "(Old Value: ". #%State%#.")"; }@#
Table of Contents



EmailPlus Advanced Rule.
Updated: 05/18/12
Version: 7.0.1.8
In addition to the built-in ways to control when an email rule fires, you can create your own complex expressions under the Advanced Rule tab. The system provides a set of built-in functions to get field values, etc., plus you can call any global scripts in the schema. You can use variables, but only if you define them in the advanced rule. Functions such as Gfv (GetFieldValue) are provided for you because there is no way to get at $entity. A short example expression follows the list below.
The following are built-in functions:
OneOf($list, $item)  Takes a list and searches it for the presence of the given item. Returns TRUE if the item is found, else FALSE. $list - a reference to an array of string values; $item - the value to search the list for.
Gfv($fieldName)  Simple encapsulation of GetFieldValue, equivalent to GetFieldValue($fieldName)->GetValue(). $fieldName - the name of the field.
Gfov($fieldName)  Similar to Gfv, except it is the simple encapsulation of GetFieldOriginalValue, which returns the original value of a field before any changes were made.
Gfvs($fieldName)  Simple encapsulation of the ClearQuest API call GetFieldValueStatus, equivalent to GetFieldValue($fieldName)->GetValueStatus().
FChg($list)  Examines the list of field names given and returns a string to indicate what fields have changed: "ANY" means one or more of the fields in the list have changed, "ALL" means all the fields in the list have changed, "NONE" means none of the fields have changed. $list - a reference to an array of field names to check for change.
StoDT($dateString)  Converts a date string value from a ClearQuest DATE_TIME field to a UTC date/time in seconds. The return value is the number of non-leap seconds since the epoch (on most systems, 00:00:00 UTC, January 1, 1970). $dateString - a date string of the format "YYYY-MM-DD hh:mm:ss".
DTtoS($timestamp)  Converts a UTC date/time in seconds into a date string of format YYYY-MM-DD hh:mm:ss. $timestamp - a UTC date/time in seconds.
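As a hypothetical example built only from the signatures above (the field and state names are assumptions), an Advanced Rule expression that fires when Severity has changed and the record is in one of two states could look like this:
	# Fire only if Severity changed and State is one of the listed values.
	my @watched = ("Severity");
	(FChg(\@watched) eq "ANY") && OneOf(["Submitted","Opened"], Gfv("State"));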
Note that if anything is added to the Advanced Rule tab, when the record is saved the system creates a helper record called EmailPlusAction, but that record type doesn't have a form associated with it.
WARNING: If you import EmailPlusRule and EmailPlusTemplate records from another database, because Notification hooks don't run during import, the EmailPlusAction records don't get created. So, either import those records as well, or if already imported, simply batch update all EmailPlusAction records. The act of editing the records without actually making any changes will automatically create the required EmailPlusAction records.

Table of Contents





EmailPlus subscribers
Updated: 05/21/12
Version: 7.0.1.8
In addition to pre-defined users receiving emails at pre-defined points, the system allows users to subscribe to email rules themselves in a couple of different ways.

Users can subscribe to an EmailPlusRule such that they will receive an email any time that email fires for any record.
1) On a given EmailPlusRule record, an admin needs to create a subscription list record (EmailPlusRuleSubscription) for each site. This allows any user to subscribe to that rule. The admin can also restrict who can subscribe to a rule by specifying CQ groups in the Permitted Subscriber Groups field.
2) A user then views the EmailPlusRule record, double-clicks on the subscription record reference under the "Subscribers" tab and selects Subscribe.

Users can also receive all designated emails regarding a specific record.
An admin can create an EmailPlusTemplate of type "Special Interest" and create email rules connected to it. On a record, a user can add themselves to the Subscriber List. That user will then receive all special interest emails for that single record.

Table of Contents





Set/load default values.
Updated: 01/08/13
Version: 7.1.2
If entering similar data into a Submit form many times over, it is useful to save the common field entries as defaults for the next submission. After you have entered all the common data, select the Values button on the right-hand side and choose "Save as Default". Then, continue to fill out unique field values and submit the form. The next submission you make, select "Load Default" under the Values button and the common fields will be populated with the saved values. As of CQ 2.0, the saved values stay saved forever.
However, if you want to set a permanent default value for all users, you'll need to set up a hook in the Fields matrix under the Default Value column. Default values set with a hook in the schema are only set upon Submit.
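For example, a minimal Default Value hook (Perl) could be as simple as the following; the value is an assumption, and $fieldname/$entity are the same variables used by the other field hooks on this page.
	# A minimal sketch: set a hard-coded default at Submit time (the value is an assumption).
	$entity->SetFieldValue($fieldname, "3-Normal");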

NOTE: The information is stored in the Bucket table of the user db. If a user accesses the same schema in a different user db, they will have to set up the defaults in the other db as well.

WARNING: If by setting a certain field value, a secondary action occurs, such as the creation of a stateless helper record that then links back to the record being submitted, be warned that loading the defaults into another new record may cause the original helper record to be linked to the new submission as well, instead of creating a new helper record. Be wary of secondary actions and linked records when working with default values, as they sometimes have undesired and often hidden consequences.
To guard against inappropriate linking between records, it will be necessary to clean up reference lists programmatically in the Value Changed hook. Unfortunately, there isn't any way to hook into the actual Load Default and Save as Default actions. However, the following code assumes that the linked record either has a back-reference field indicating which parent it's linked to, or that the parent's id is embedded in the unique id of the child.
	my $current_id = $entity->GetDisplayName;
	foreach my $entry (@{$entity->GetFieldValue($fieldname)->GetValueAsList}) {
		if ( "$entry" !~ /^$current_id:/ ) {
			$entity->DeleteFieldValue($fieldname,$entry);
		}
	}
Table of Contents



Create a new field.
1) In the CQ Designer, open the schema in which the form lives.
2) Open the appropriate record type folder (most often Defect) and double-click on Fields.
3) Right-click anywhere in the right-hand pane and select Add Field...
4) Fill in the Field Name and select an appropriate field Type. The Help Text is optional.
5) The field is created when you click on the close (X) in the upper-right corner of the window. The field's position among the other fields listed can be changed by clicking on the box to the left of the fieldname and dragging the field to its new location. However, I'm unsure at this time how to preserve that location across schema versions. That is, if you check in and then check back out the schema, the new field will again be at the very bottom. The default behavior for a new field is OPTIONAL.
6) Open the Forms folder for the same record type. In turn, double-click on the "Submit" and "Record" forms. The actual names for those two forms vary between record types.
7) Select the new field in the Field list; a separate window to the right of the CQ Designer. Drag it to the desired location on the form. Its properties and relationship to other fields can be modified via the Form Layout menu. Alternatively, you can first select a field type from the controls palette (also a separate window to the right of the Designer). Associate that field type with an actual field by right-clicking on it and selecting Properties. From there, fill in the Field Name.
8) Validate the schema via File -> Test Work... This will validate the schema in the designated test database. That database is selected via Database -> Set Test Database... If 0 error(s) found, the CQ Client will automatically launch so that you can test your changes in context.
9) If satisfied with the changes, save the schema in the Designer via File -> Check In...
10) Upgrade the real database with the new schema version. Database -> Upgrade Database...

Table of Contents





Set up a parent/child control field.
1) Use steps 1-4 in "Create a new field". The field type is REFERENCE_LIST, and the additional field called "Reference To" needs to be filled in with the name of the field with which this field will be associated. The parent/child relationship is most often set up as a connection between records, but it can be used to relate any two arbitrary fields. However, since the first column that automatically shows up when using a REFERENCE_LIST is "id", and that is tied automatically to the three push buttons, you must change the three buttons if not using the "id" column. The second tab in the field Properties is Extended. It contains a single List View ID, which must be unique among all parent/child control lists for the tab of that form. It is the same name referenced in the Properties of the three buttons, under their Extended tabs, in their Associated Component fields.
2) Use steps 6-10 in "Create a new field".

Table of Contents





Remove a field.
If you delete a field from a form, it remains active in the database. That is, users can still query on it. In addition, if you attempt to create a new field with the same name, CQ will yell at you. Once a field exists in the schema repository, it's there forever.

Table of Contents





Toggle whether a field is mandatory.
Check out the appropriate schema. Go to Record Type folder -> Record Type -> States and Actions -> Behaviors. For the field in question, in turn right-click in each state and toggle Mandatory or Optional.

Table of Contents





Create a pull-down menu field.
In the Fields matrix, right-click in the Defect:Fields pane, select Add Field... and give it appropriate properties. Choose "CONSTANT LIST" in the Choice List column and fill in the list values. If this field's values are not going to be dependent on another's, check the Limit To List box. Open Forms -> Defect_Base_Submit and instead of dragging the new field from the "Field list for 'Defect'", click on the Control Palette icon for Drop-down List Box and place it on the form. Go to the Properties sheet of the new drop-down field and enter the Field Name of the field you just created in the Fields matrix. Test the work.

Table of Contents





Add help text to a field.
In the Fields matrix/grid, right-click on the field in question and select Field Properties. Then, select the Help Text tab and enter the information there. Once entered, the help text will show up in the Client by right-clicking on that field and choosing Help. You are limited to 255 characters of plain text.

Table of Contents





Create a dynamic/static/dependent choice list field.

Static choice list.
Simply select CONSTANT LIST in the Choice List column of the field matrix. A window will pop up asking for the values of the list. Use the Recalculate Choice List check-box if you want the choice list recalculated for each action, perhaps because this field's choices depend on another's. The Limit To List check-box means that you don't want the field's choices recalculated. The value choices are static, independent of other field values. To make the list a pull-down menu, see "Create a pull-down menu field".

Dynamic choice list.
A dynamic choice list allows users that have permission to edit the choice list from the Client. In the Designer Workspace, right-click on the Dynamic List Names folder and select Add. Give the new list an appropriate name. In the Fields matrix, click in the Choice List column for the field to be the dynamic choice list and choose DYNAMIC LIST. Choose the dynamic list you want to use and click OK to close that box.
In the Client, select Edit -> Named Lists -> dynamic-list. To add a new value, simply select a blank line and type it in. To insert a value between two existing ones, right-click in the box and select Insert. Do likewise to Remove a value. Multiple fields can be associated with a single dynamic list.
NOTE: If a record is currently open for edit, the changes made to a dynamic list will not show up until the next time the record is opened.
See also "Populate a dynamic list from a file".
Note that the contents of a dynamic list can only be populated from the Client and not the web interface.

Dynamic choice lists can also be populated using code. Instead of choosing "DYNAMIC LIST" when selecting the type of choice list, select "SCRIPTS->PERL" and place something similar to the following in code. By doing this you can do things like have a "blank" in the choice list so that the user can reset the value. Or, you can have multiple dynamic choice lists associated with the same field.
@choices = " ";
if ( $use_list_A ) {
   $list_reference = $session->GetListMembers("Reason_For_Close_A");
} else {
   $list_reference = $session->GetListMembers("Reason_For_Close_B");
}
push(@choices,@{$list_reference});
Dynamic lists automatically sort their contents. If you want a list to be static, such as "High", "Medium", "Low", you'll need to enter them as constants in the schema, otherwise it will come out as "High", "Low", "Medium" in the field's choice list.

Dependent choice list.
There are two ways to accomplish this, one from the parent side and the other from the dependent side.
1) In the "parent" field (the field upon which the other field is dependent), create a CONSTANT LIST of choices in the Choice List column. Set it to Limit to List. Write a hook in the Value Changed column that retrieves the value of the current field and uses it to set the choice list of the dependent field. The upside of doing it this way is that you don't have to set "Recalculate Choice list" for the dependent field. The dependent's field choice list changes ONLY whent the parent's value changes. The downside is you can't take advantage of dynamic lists in the dependent field, and also, up to at least CC 5.0, the SetFieldChioceList method doesn't work in VBScript. The array used to populate the dependent field's choice list can be generated as a list hard-coded in the parent field, as a query of information in CQ, or possibly retrieved from an external flat file. Note that in SetFieldChoiceList, you must pass the array itself and not values contained therein.
  my $value           = $entity->GetFieldValue($fieldname)->GetValue();
  my @dependent_array = ("choice1","choice2");   # populate this however fits your data (hard-coded, query, flat file)
  $entity->SetFieldChoiceList("dependent-field-name",\@dependent_array);
2) To have the dependent field be a list of values based on the value of the parent, write a hook in the Choice List column of the dependent field. You must set "Recalculate Choice list" in the dependent field's Choice List definition. Note that this field's values would then be reevaluated ANY time ANY field value changes on the form. That's the unfortunate performance reality of "Recalculate Choice list".
  $OS = $entity->GetFieldValue("OS_version")->GetValue;
  if ($OS eq "UNIX") {
    push (@choices, "Solaris", "HP-UX","LINUX");
  } elsif ($OS eq "Windows") {
    push (@choices, "NT 4.0","98","95");
  }
  return @choices;
Table of Contents



Add a push button form control.
Push buttons are normally associated with record scripts that perform an action when the button is selected. To add a button to a form:
1) Open the form to be edited.
2) A box containing button type choices should appear automatically to the right of the form in the Designer. Look for the Push Button type and drag & drop it into the desired location on the form.
3) Right-click on the new button and set its name in the General tab and associate it with a Record Script in the Extended tab.
NOTE: In CQ Web, hook code associated with buttons is executed on the web server. Therefore, one should not invoke GUI-based objects, as they will appear on the server, not on the web client. If the button returns text, to ensure the button is disabled for web clients, go to the Properties sheet of the button and deselect "Enabled for Web" in the Extended tab.
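As a rough sketch (Perl) of what such a record script might contain; the script name, field, and value here are assumptions, and $entity is the hook-context record object used throughout this page. Note that, as mentioned under "Programmatically send emails", the record must be in an editable condition for the button to work.
	sub Mark_Reviewed {
		my($param) = @_;
		# Hypothetical record script called by the push button: stamp a field
		# and return text for the user.
		$entity->SetFieldValue("Review_Status", "Reviewed");
		return "Review_Status has been set.";
	}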

Table of Contents





Add a combo box form control.
This control allows the user to select from a pull-down list, type in one of the given choices (doesn't automatically complete based on minimum match) or type in their own choice (assuming the choice list wasn't set to "Limit To List"). It's a combo between a simple text field and pull-down menu.
1) Open the form to be edited.
2) Select the Combo Box form control from the Control Palette and place it on the form. There are two different ones depending on whether you want the list to appear in full or be a pull-down menu.
3) Right-click on the new control and select Properties:
In the General tab, associate the new control with an existing field. That field should be one that has a choice list associated with it. The Label can be any name, but for clarity should probably correlate closely to the fieldname just selected. The X and Y selections designate the coordinates of the upper-left corner of the new control on the form. The Width and Height are self-explanatory. If the height is smaller than the number of choices in the list, CQ will automatically add a vertical scroll bar.
In the Extended tab, the Auto Sort check box selects whether the entries in the choice list are sorted alpha-numerically. If not selected, choices appear in the order in which they were added.
In the Context Menu Hooks tab, record scripts can be added to the control's shortcut menu (right-click menu). The record script associated with this field must actually SetFieldValue.
In the Web Dependent Fields tab, in order to have dependent fields work with a web client, the field on which the dependency is based needs to be explicitly declared.

Table of Contents





Add a new tab.
If your form has more controls than can comfortably fit on the Main tab or need to be separated by category, one can simply add additional tabs to the form.
Once the form is displayed in the Designer for editing, select Edit -> Add Tab. Right-click on the new tab and select Tab Properties to give it a new name, select where it appears among the other tabs and optionally set User/Group access.

Table of Contents





Move a field between tabs.
With the form displayed for editing in the Designer, simply right-click on the field and select Cut. Go to the destination tab and Paste the field. It will by default be placed at the bottom of the tab. To move it to its permanent location, simply drag and drop both the box and its Label. Fields can be copied or moved between tabs on different forms as well.

Table of Contents





Remove a tab.
In the CQ Designer, with the form open for edit, select the tab to be deleted and Edit -> Delete Tab. If a tab is deleted, all controls in the tab are removed as well. They aren't deleted from the schema, just that tab. Delete Tab cannot be undone.

Table of Contents





Clone a form.
With CQ Designer open to the schema containing the form, simply export the form and then re-import it. CQ will automatically prompt for a new name upon import. See Export/import a form.

Table of Contents





Export/import a form.
To export, in the CQ Designer, right-click on the form and select Export Form. If the form is currently open, close it so that all changes are saved prior to export. The export will create a .frm file.
To import, in the CQ Designer, right-click on the Forms folder and select Import Form. The imported form must have a unique name.

Table of Contents





Create a "Keywords" type control.
The control type that is normally associated with Keywords or Symptoms isn't a single button type that can be chosen from the Form Controls menu. It's actually a combination of a few items.
Create a new field of type MULTILINE_STRING whose Choice List can be either DYNAMIC LIST or CONSTANT LIST. On the form, add a generic List Box control that isn't yet associated with any particular field. Don't drag the newly created field over from the "Field List for ..." dialog window. Enter the Properties sheet of the newly added control and associate it with the newly created field. You'll notice that the ellipsis [...] button magically appears next to the List Box. You now have a Keywords type control.

Table of Contents





Export/import dynamic lists.
Updated: 04/04/17
Version: 8.0.1.14
Dynamic lists are normally built up by users logged into the CQ Client. However, lists can be populated via scripts with the following commands.
CQ 8.x and later
New in 8.x, dynamic lists can be exported/imported programmatically, which is much more efficient than using "importutil". The "s" is for source and the "d" is for destination.
$s_listnames_ref = $s_session_o->GetListDefNames;
$d_listnames_ref = $d_session_o->GetListDefNames;

$n_lists	= scalar(@$d_listnames_ref);
$n		= 0;

foreach $list_name (@$d_listnames_ref) {

	$n++;
	print "$n/$n_lists: $list_name\n";


	# The case can occur where a list exists in the destination and not the
	# source.  Don't clear out the destination if that's true.
	if ( ! grep(/^$list_name$/,@$s_listnames_ref) ) {
		print "\tList doesn't exist in the source database. Skipping.\n";
		next;
	}


	# Clear out the destination list.
	$d_member_ref = $d_session_o->GetListMembers($list_name);
	foreach $member (@$d_member_ref) {
		$d_session_o->DeleteListMember($list_name,"$member");
	}


	# Rebuild the destination list.
	$s_member_ref = $s_session_o->GetListMembers($list_name);
	foreach $member (@$s_member_ref) {
		$d_session_o->AddListMember($list_name,"$member");
	}
}
CQ 7.x and earlier
In the file below, each dynamic list entry must be alone on a separate line. Note that in either use case listed here, if the existing list already has some entries, any imported entry that differs will simply be appended to that list.

As of CQ 7.1.1, to delete any existing values and simply "overwrite" the whole list, you'll have to do it manually by logging into the db. If doing things programmatically, say to initialize a new user db, there is no way to get the list of dynamic list names; you'll have to manually create an array containing them (see the sketch after the commands below).
  # importutil importlist [-dbset dbset] cqlogin cqpassword db_name list_name inputfile_name
The same command(s) can be used to copy the contents of a dynamic list to "seed" the contents of another list. Unfortunately, it can't be done completely from within the Designer. In the Designer, create a new dynamic list definition and push that change out to the user database. Use the following command to export a known list from a user database and then use the above importlist utility to populate the new, empty list.
  # importutil exportlist [-dbset dbset] cqlogin cqpassword db_name old_list_name inputfile_name
  # importutil importlist [-dbset dbset] cqlogin cqpassword db_name new_list_name inputfile_name
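If you are seeding several lists this way, a small cqperl wrapper around importutil saves some typing. This is only a sketch; the dbset, login, db name, and file-naming convention below are assumptions.
	# Hand-maintained array of dynamic list names (CQ 7.x can't list them for you).
	my @list_names = ("Reason_For_Close_A", "Reason_For_Close_B");
	foreach my $list (@list_names) {
		# Assumes each list's entries live in a file named after the list.
		system("importutil importlist -dbset CQ_DBSET cqadmin cqpassword USERDB $list $list.txt");
	}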
Users logged into the destination database will need to log out then back in to see the changes.

Table of Contents





Create a variable/dynamic Keyword list.
Keyword lists are useful when there is a need to run queries against a category of tickets, or tickets that are similar across projects. However, even though several projects may use the same schema, they may want different keyword lists. It's possible to have the keyword list be hooked to the value of another field, such as Project (as in the example steps below).
1) Create a Keywords field (if necessary) of Type MULTILINE_STRING. Set its Choice List to CONSTANT LIST, but don't put any values in the list. Set the "Limit to List" checkbox if desired. If the field already exists in your schema change its properties to match those listed here.
2) Import the Project package (if necessary) to create the Project field and Project stateless record type, or simply create one yourself.
3) In the Project field's Value Changed hook, place the following code:
	my @Choices = ("");
	$entity->SetFieldChoiceList("Keywords",\@Choices);
	$entity->SetFieldValue("Keywords","");
	my $project = $entity->GetFieldValue($fieldname)->GetValue;
	if("$project" ne "") {
		my $list = $session->GetListMembers($project);
		$entity->SetFieldChoiceList("Keywords",$list);
	}
4) For any Project that will be defined/created in the Client, define/create a corresponding Dynamic List in the Designer. Note that the hook above passes the Project value straight to GetListMembers, so the dynamic list name must match the project name exactly (if you prefer a naming convention such as "project_keywords", append "_keywords" to $project in the hook). Because dynamic list names cannot have any spaces, the project names cannot either. Yes, the hook code could be modified to allow for that, but it isn't written that way currently.
5) In the client, populate the dynamic lists for each project. The Keywords list should follow the name of the project. The hook will clear the Keywords list if the value of Project is changed.

Table of Contents





Add an image to a form.
Bitmap images can be added to a form for a Windows Client ONLY. It won't work for UNIX or CQ Web users.
With a form open for edit in the Designer, simply drag the "picture" icon from the control palette to the form. Right-click on the picture form control and select a bmp, gif, or jpg.

Table of Contents





Add a Duplicate form control.
The control called Duplicate is a built-in function that cannot be altered and has no fields associated with it. When a form is open for edit, simply click on the button types called Duplicate Base and Duplicate Dependent and place them on the form. The Duplicate Base is a single record id that this ticket is a duplicate of. The Duplicate Dependent is a list of records that are duplicates of this ticket.

Table of Contents





Add an Option (radio button) control.
Radio buttons are a series of on/off switches connected in such a way that only one switch can be "on" at a time. The following example will create radio buttons to indicate whether a ticket is a defect or enhancement.
1) Create a field called "Defect_or_Enhancement" of type SHORT_STRING. Give it a Default Value of "Defect". Whatever Value is assigned in steps 3 & 4 will be stored in this field.
2) Place a pair of Option Buttons on a form.
3) In the Properties of one of the buttons, on the General tab, set the Field Name to the field created in step (1). Set the Label to "Defect?". On the Extended tab, set the Group Name to any unique string. Set the Group Label to "Defect or Enhancement:". Set the Value to "Defect".
4) In the Properties sheet of the other button, on the General tab, set the Field Name to the field created in step (1). Set the Label to "Enhancement?". On the Extended tab, set the Group Name to the same one used in step (3). The Group Label will automatically be set to the same one set in step (3). There is only one Group Label per Group Name. Set the Value to "Enhancement".
When a user runs a query on the field from step (1), it will be a string whose value is either "Defect" or "Enhancement" and can't be anything else. You should probably make the field READ_ONLY for states such as Resolved, Closed, Postponed, etc...

Table of Contents





Set the field tab order.
Updated: 07/29/10
The tab order determines which control receives the focus when a user presses the Tab key. Each time the user presses Tab, the focus moves to the next control in the tab order. By default, the tab order of controls is the order in which you added the controls to the form. You can change the tab order so that it reflects the order in which you expect your users to use the controls.
To change the tab order of controls:
1. Select the dialog tab containing the controls whose tab order you want to set.
2. Select Form Layout > Set Tab Order. ClearQuest changes to tab-order mode. In this mode, each control displays a number indicating its position in the tab order.
3. Click the control you want to be first in the tab order.
4. Click each of the remaining controls in the order you want them to receive the focus. As you click each control, its displayed number changes to match the new tab order. After you click each of the controls once, ClearQuest exits tab-order mode. You can also exit tab-order mode by clicking in an empty portion of the dialog tab.

Tab order works for editing or just viewing records. Tab order works in both the fat client and web interfaces. Fields that are read-only and currently do not have a value are automatically skipped. Don't bother setting the tab order of the form control labels, as they don't accept the Tab key when working with a record anyway.

Table of Contents





Embed a URL in a description field.
Updated: 09/13/06
As of CQ 2002, a complete URL embedded in a description field will be automatically detected and made clickable for the user. If the URL contains spaces, it must be enclosed in double-quotes.
As of CQ 7.0, you can generate URLs using a wizard in the web interface. Log into the database and select "Shortcuts" under the New menu in the toolbar.
To embed a URL in an email notification, see Add free-form text to an email rule.
Note that if the URL that the user clicks is, say, a MAILTO: URL, and, say, the body of that email is the URL for the current record, the embedded URL will need to be URL encoded to protect its special characters. Unfortunately, as of this writing, a "mailto" hyperlink doesn't work in the web interface. For example:
If the URL you want to appear in the body of the email is:

  http://cqwebserver/cqweb/main?command=GenerateMainFrame&service=CQ&schema=Enterprise&contextid=DB01&entityID=12345678&entityDefName=Defect

you would create the MAILTO link such that the above special characters are URL encoded:

  MAILTO:elvis.presley?subject=DB0100001234&body=http%3A%2F%2Fcqwebserver%2Fcqweb%2Fmain%3Fcommand%3DGenerateMainFrame%26service%3DCQ%26schema%3DEnterprise%26contextid%3DDB01%26entityID%3D12345678%26entityDefName%3DDefect

:	%3A
=	%3D
/	%2F
?	%3F
&	%26
See http://www.blooberry.com/indexdot/html/topics/urlencoding.htm

WARNING: The URL format was changed in CQ 7.0. Your URL generator will need to be modified and URLs in existing records will need to be updated.

Table of Contents





Date/time form control WARNING.
Updated: 08/23/06
Field definitions in CQ can fire a hook when the field's value has changed. Under normal circumstances, that hook doesn't fire unless the field's value has actually been changed. That sounds like an obvious statement, but, that hook will also fire for a date/time field if the "time" is not displayed. That is, if a date/time field isn't configured to display the "time", if you even just Modify a record, that field's Value Changed hook will fire immediately, regardless. Beware!
The reason, it is surmised, is that the data retrieved from the database has a time associated with it, but because it isn't displayed, CQ interprets it as a difference. If you don't have any code in the Value Changed hook, no harm done. But, if there is code that perhaps makes a modification elsewhere in the ticket based on the new value, CQ WILL pick up on that as a change. The real problem here is that you don't expect that hook to be firing and changing the ticket when that field hasn't been modified. This gets even more insidious if you are editing a bunch of tickets in "batch" mode in the Client. Since the first ticket picked up on a change to that date/time field, the changed value elsewhere in the first ticket will be applied to all subsequent tickets, even though you did not actually change the date/time value. Beware!

Table of Contents





Dynamically change a field's label on a form.
Updated: 09/20/06
When creating a field in the Designer, you give that field a "label" (the name that appears next to the field for the user). That label is then static. However, it is possible to dynamically change text that appears next to a field.
Create the field you want (FieldA), but don't give it a label. Create a second field called, say, FieldA_label. Place FieldA on the form. Next to that field on the form, place a "static text" form control. Assign that static text form control to FieldA_label. Now, any value you place into the FieldA_label field will appear next to FieldA as text, which will look like the field's label to the user. That value can be changed at any time in hook code.
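For example, hook code elsewhere on the record can change the visible label text at any time (FieldA_label is the helper field described above; the label string is just an illustration):
	$entity->SetFieldValue("FieldA_label","Estimated fix date:");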
However, this trick has three downsides:
1) FieldA cannot be mandatory at any time. The way a user knows that a field is mandatory is that the field's label turns red. But, since FieldA doesn't have a label, the user won't know that it's mandatory. Well, actually, they will know it when they Save the record. So, if you can live with getting the abrupt error message when you Save, you can have the field be mandatory.
2) In the web, if you change the value of FieldA_label, that change may not show up immediately, as the web record doesn't always refresh. You can give the user a Refresh Record button, but that's a hassle for the user. So, there is a record refresh problem when changing the FieldA_label text.
3) The user will think that the name of the field is whatever value is in FieldA_label. It will be difficult for them to remember what the actual field name is when writing a query.

Table of Contents





Manually set the db column name.
Updated: 03/28/07
When adding a new field, CQ automatically assigns it a db column name. The column name is the all lower-case version of the field's name. However, at that time, you are allowed to change/set the column name to any unused name.

NOTE: When a field is deleted from CQ, it is only a logical deletion. The column of data remains in the database. Therefore, if a field was previously deleted, while a new field can have the same name, the column name must be different.

NOTE: If a schema is imported into a different schema repo and assigned to a database, the column names will all be the default values. If there are scripts that work at the SQL level, they might have to take the different column names into account.

Table of Contents





Scrolling.
Updated: 03/25/10
Form controls can be fitted with scroll bars to see text that is outside the size of the control. Scroll bars are added by right-clicking on the form control in the Designer.
"Auto" scroll allows the user to move the cursor with a keyboard arrow key to see the hidden text.
Horizontal and vertical scroll bars can be added as well. However, even after adding them to the form control, they won't appear on the form until they are actually needed.

Table of Contents





Include an ampersand (&) in a field label.
Updated: 06/10/10
If you include an ampersand in a field label, it will not show up as that, but will rather place an underscore under the next character for the purpose of hot keys.
However, if you want an actual ampersand there, simply place two of them next to each other, as in "Description && Workaround:", then the ampersand will simply show up as a regular character on the form.

Table of Contents





Backfill data from one field to another en masse.
Updated: 07/19/11
A mass data backfill may be needed if a field is replaced by a new one of, say, a different data type. For example, if you change an integer field into a short string field, you have to create a new field of the new type and then backfill the data from the original field.

Batch Jobs
CQ allows you to modify many records by selecting them all and performing an action, such as Modify. In the Client, run a query that selects the records to be updated. Select all of them and then choose the Modify action. Update the field in question and Save. This is a simple way to backfill records, but has some downsides. If the value you want is variable, such as an application name, this would backfill the value from the first record into all the remaining records, which is undesirable. Also, if there are any validation issues, which can happen with old records, the backfill won't occur.

Scripting
You can write a Perl script to access the records using the API. This gives a lot of flexibility in manipulating and backfilling data, but still has the downside that if there are unrelated validation issues, it will be a problem trying to get the records committed. Also, this method can be VERY slow for thousands of records.
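A minimal sketch of that approach, run as a standalone cqperl script (the credentials, database/dbset names, record type, field names, and IDs are all placeholders):

	use CQPerlExt;

	my $session = CQSession::Build();
	$session->UserLogon("admin","secret","MYUSERDB","MYDBSET");

	# IDs of the records to backfill, e.g. gathered from a query or an exported file.
	my @record_ids = ("MYDB00002340","MYDB00002342");

	foreach my $id ( @record_ids ) {
		my $entity = $session->GetEntity("Defect",$id);
		$session->EditEntity($entity,"Modify");
		$entity->SetFieldValue("new_field_name",$entity->GetFieldValue("old_field_name")->GetValue);
		my $error = $entity->Validate;
		if ( "$error" ) {
			# Unrelated validation problems on old records will land here.
			$entity->Revert;
			print "Skipped $id: $error\n";
			next;
		}
		$entity->Commit;
	}
	CQSession::Unbuild($session);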

SQL
The most efficient way to backfill the data is using SQL.
  SQL> update record_type set new_field_name = old_field_name where new_field_name is null and old_field_name is not null;
Import Tool
The SQL method won't work if either of the fields is of type REFERENCE. Reference fields are not actually part of the record, but rather use intermediate tables that make the SQL complicated. If either field is a reference, take the following steps. While slow, this is probably the cleanest way to backfill a new field.
1) Create a text file that is in the format of a CQ import file. Use a query or Perl script to build it. The first row is a header row that has the field names, and all subsequent rows are the data. It needs the IDs to know which records to update, and only needs one other column, which is the data to be imported. For example:
	"id","Application"
	"MYDB00002340","MyApp"
	"MYDB00002342","Application2"
	"MYDB00002418","App3"
	...
2) Then, simply use the CQ Import Tool to import the data. The good thing about the import tool in this scenario is that it doesn't run any validations, which is useful when backfilling old records. Note that if the new field is a reference, the value being placed into it needs to be an existing record of that type. It does do that validation.

Table of Contents





Difference between Drop-down Combo and Drop-down List boxes.
Updated: 07/24/10
Both form controls allow the user to go to a value using a scroll bar.
The Combo box allows the user to type in a case-sensitive string and the system will jump to the first string that matches the characters. The List box does not allow a user to type in a value.
Warning: If the Combo box is a REFERENCE type field and the user types in a value that isn't a valid reference then moves the mouse to a different field (changes the focus), the system will throw an error that the entered value isn't a valid reference. Unfortunately, it's not a user-friendly error.

Table of Contents





Determine if a dynamic list is being used by a schema.
Updated: 08/04/09
CQ version: 7.0.1

Unfortunately, it isn't possible to definitively, programmatically determine if a given dynamic list is actually being used by a schema.
Dynamic lists can be used as a field's choice list or to store information that is called up in a hook via the GetListMembers call. You can use SQL to determine if a list is referenced by a field's choice hook, and you'll have to use "Find in Hooks" to know if GetListMembers is being called. Note that the closest you can get with the API is to know that a field's choice list is "DYNAMIC LIST", but it won't tell you which dynamic list. So, you can determine it definitively, but only by combining those few different methods.

Table of Contents





Resize a form in the Eclipse Designer.
Updated: 08/17/12
CQ version: 7.1.2

Resizing a form in the Eclipse-based Designer can be tricky the first time you do it. Resizing can only be done if the schema is checked out. If you click on a form to make it the focus, the default form size controls in the corners turn into crossed-arrow pointers, which allow you to drag the form, but not resize it. However, if you click on the tiny, tiny border that surrounds the form, you can get to the form resizing controls.

Table of Contents





Create a list of fields changed during an action.
Updated: 02/12/16
CQ version: 7.1.2

What changes were made during an action can be captured by the Audit_Trail package. However, for the purpose of, say, sending an email to users about changes made during a given action, the audit trail can be difficult to parse.
An alternative to parsing the audit trail is to use the GetFieldsUpdatedThisAction method. That method returns an array of field objects, on which you can run GetFieldValue and GetFieldOriginalValue.
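A minimal sketch in Perl, assuming the returned collection can be walked with Count/Item like the other field collections shown on this page (confirm against the API reference for your release):

	my $changes = "";
	my $fields  = $entity->GetFieldsUpdatedThisAction();
	for ( my $x = 0; $x < $fields->Count; $x++ ) {
		my $fieldname = $fields->Item($x)->GetName;
		my $old       = $entity->GetFieldOriginalValue($fieldname)->GetValue;
		my $new       = $entity->GetFieldValue($fieldname)->GetValue;
		$changes     .= "$fieldname: ($old) -> ($new)\n";
	}
	# $changes can then be dropped into the body of a notification email.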

Table of Contents





Programmatically determine the contents of a dynamic/named list.
Updated: 03/27/17
CQ version: 8.0.1.14

See the session methods called GetListMembers, SetListMembers, DeleteListMember, and AddListMember.
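A minimal sketch (the list name and values are hypothetical; confirm the exact argument order of the modify calls against the API reference for your release):

	my $members_ref = $session->GetListMembers("Products");        # read the current members
	$session->AddListMember("Products","NewProduct");              # add one entry
	$session->DeleteListMember("Products","ObsoleteProduct");      # remove one entry
	$session->SetListMembers("Products",\@replacement_members);    # replace the whole list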

Table of Contents





Set up web dependent field.
Updated: 08/22/17
CQ version: 8.0.1.14

If a field's choice list or value is updated by, say, another field's value changed hook, the changes won't appear when using the web interface unless the field has been enabled as a web dependent field.
Prior to 8.0.1.07 the web dependent property was set in the Designer in the field's Properties. Now that property is set via the command line using the following:
	packageutil setwebdependentfields -dbset dbset admin-login admin-password schema record-type fieldname 1 -nocheckin
The schema must be checked in when the above command is executed, as it will check it out to make the change. The value of "1" sets the web dependency. A value of "0" would remove the web dependency. Always specify the -nocheckin option so that you can upgrade the user database and test the change before checking back in.
The schema repo must be at feature level 8.
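For example, to make a hypothetical Severity field on the Defect record type web dependent in a schema named MySchema (all names here are placeholders):
	packageutil setwebdependentfields -dbset MYDBSET admin secret MySchema Defect Severity 1 -nocheckin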

Table of Contents





List web dependent fields.
Updated: 08/22/17
CQ version: 8.0.1.14

If a field's choice list or value is updated by, say, another field's value changed hook, the changes won't appear when using the web interface unless the field has been enabled as a web dependent field.
Prior to 8.0.1.07 the web dependent property was set in the Designer in the field's Properties. Web dependent fields can be listed using the following:
	packageutil showwebdependentfields -dbset dbset admin-login admin-password schema record-type
Table of Contents



Get a field's database column name.
Updated: 06/18/18
CQ version: 8.0.1.14

Designer
In the Schema Designer, simply right click on the field to get its Properties.

API
Unfortunately, there doesn't appear to be an API method for getting a field's corresponding database column name.

Table of Contents





Determine if a field is being displayed on a form.
Updated: 08/02/18
CQ version: 9.0.1.3

There is no search tool or field property information that will readily tell you if a field is being used by a form control.
Moreover, you can't inspect a "cqload exportintegration" of the schema either, as the "formdef" information is encoded (not human readable).
Other than manually inspecting the form in the Designer, I don't know of a way to determine if a field is used by a form control. You simply have to inspect each form control to find the one that is referencing the field you're interested in.

Table of Contents





Determine the max length of a field.
Updated: 08/16/18
CQ version: 9.0.1.3

The following will give the largest value ever entered into a field.
	SELECT max(length(column-name)) FROM table_name;
The following will give you the maximum size possible, as defined in the database. Note that the table name is in all upper-case.
	SELECT colname,length FROM syscat.columns WHERE tabname='TABLE-NAME';
The list of columns available in syscat.columns can be found at https://www.ibm.com/support/knowledgecenter/SSCRJT_5.0.1/com.ibm.swg.im.bigsql.commsql.doc/doc/r0001038.html.

Table of Contents





Get a field's title.
Updated: 09/12/18
CQ version: 9.0.1.4

Getting a field's title can be useful when presenting the user with an error message, as there are many times where the title that appears on the form is different than the actual field name.
Unfortunately, I don't know of any programmatic way to get that information. The best you can do is hard-code the "title" in a custom error message.

Table of Contents





Create a field/action hook.
Field and action hooks can be one of the following types. The following are executed in order when an action begins:
Action Access Control - Is the user allowed to execute this action?
Field Permission - Determines whether a field is mandatory, optional or read-only.
Action Initialization - Sets up field values before the action begins.
Field Default Value - Sets a default field value. Submit actions only.
Field Validation - Validates the field entry immediately after a change. Not applicable to the Import action.
Field Choice List - Runs for each field that uses the Recalculate Choice List option.
The following are executed in order when a field value is set:
Field Value Changed - Runs for each field that changes.
Field Validation - Runs for each field that changed.
Field Choice List - Runs for each field that changed and whose Recalculate Choice List is set.
NOTE: In CQ Web, field hooks only run when the Submit button is selected.

The following are executed when a record is validated:
Field Validation - Runs for each field on the record.
Action Validation - Runs for the performed action.
The following are executed when a record is committed. The record is first saved to the database without committing it, then the following are run:
Action Commit - Executes for the performed action.
- Commits the changes to the database.
Action Notification - Sends out notifications based on established e-mail rules.

In the CQ Designer, from the Workspace, open a Fields or Actions table. Single left-click in the appropriate column next to the field/action in question. Select the arrow, then SCRIPTS and the hook language of your choice (PERL or BASIC). CQ will automatically open a script editor with a subroutine already set up for you.
NOTE: A given schema will only support hooks written in one or the other language. The Windows Scripting Language is set in Workspace pane -> Schema Properties. The default is BASIC. Once the language is chosen, all hooks for that schema must be written in the same language.
Once you have edited the script, select Hooks -> Compile and CQ will run a syntax check on your script. If error free, simply close the editor and the changes will be saved. Then, File -> Test Work.

Some common ways to retrieve information follow. Method names are case-sensitive, so the placement of the capital letters is mandatory. The current entity is already set in $entity for you.
NOTE: It isn't possible to directly get the output of a system call. That is, a command like ($var = `command`;) won't return the output of "command". At best, you can redirect the output to a file and then open and read the contents of the file, as in the sketch below.
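For example, a minimal sketch of that workaround (the command and the temp file path are only placeholders):

	system("somecommand > C:\\temp\\cmd_output.txt 2>&1");   # run the command, capturing its output in a file
	open(CMDOUT, "C:\\temp\\cmd_output.txt");                # then read the file back in
	my @output = <CMDOUT>;
	close(CMDOUT);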

get a field value
Perl:       $field_value = $entity->GetFieldValue("fieldname")->GetValue;
VBScript:   fieldvalue = entity.GetFieldValue("fieldname").GetValue

choice lists
  $entity->InvalidateFieldChoiceList("list-fieldname");
  $entity->SetFieldChoiceList("fieldname","dynamic-list-name");
  @members = $session->GetListMembers("dynamic-list-name");

  $fieldChoiceListObj = $entity->GetFieldChoiceList("list-fieldname");
  $entity->SetFieldValue("other-fieldname",$fieldChoiceListObj->Item (0));
  @fielChoiceListValues = @$fieldChoiceListObj;

get the previous field value
  $old_fieldvalue = $entity->GetFieldOriginalValue("fieldname")->GetValue;

set a field value
  $entity->SetFieldValue("fieldname","value");

get the current action
  $action = $entity->GetActionName();

get the current state
  $state = $entity->LookupStateName();

get the previous state
  $old_state = $entity->GetFieldOriginalValue("State")->GetValue;

current user
  $login_name = $entity->GetSession->GetUserLoginName;

refer to a different record
  $other_entity = $entity->GetSession->GetEntity("other-entity-name",record-key);

output debug statements (on Windows, the debug statements can be viewed with dbwin32)
  $session->OutputDebugString("string");

set a default date
  $entity->SetFieldValue("fieldname",GetCurrentDate);

set requiredness in a different field
the requiredness is one of CQ_MANDATORY, CQ_OPTIONAL, or CQ_READONLY
  $entity->SetFieldRequirednessForCurrentAction("other-fieldname",$CQPerlExt::requiredness);

get the current record type's name
  record_type = sessionObj.GetEntityDef(GetEntityDefName).GetName 

For more information on the CQ API, see \apihelp\index.htm.
NOTE: Duplicate "my" declarations in a Perl hook will cause your script to produce erratic and unpredictable results. For example, even though only one "my" declaration executes per invocation of the following code, each declaration scopes $letter to its own block, so the value is lost as soon as the block ends and the whole "if" statement has no effect.
if ( "$character" eq "x" ) {
  my $letter = "x";
} elsif ( "$character" eq "y" ) {
  my $letter = "y";
} else {
  my $letter = "z";
}

Instead, use:

my $letter;
if ( "$character" eq "x" ) {
  $letter = "x";
} elsif ( "$character" eq "y" ) {
  $letter = "y";
} else {
  $letter = "z";
}
Table of Contents



Remove a field/action hook.
In the Defect:Fields matrix, simply click on the hook, click on the little arrow and choose NONE. The hook code (subroutine) will go away for that field for that type.

Table of Contents





Create a permission field hook.
Create the hook in the Permission column of the field matrix and give it code similar to the following:
Perl:
  $username = $entity->GetSession->GetUserLoginName;
  if ( $username ne "admin" ) {
    $result = $CQPerlExt::CQ_READONLY;
  } else {
    $result = $CQPerlExt::CQ_OPTIONAL;
  }

VBScript:
  username = GetSession.GetUserLoginName
  If username <> "admin" Then
    result = AD_READONLY
  Else
    result = AD_OPTIONAL
  End If
Another option to return is $CQPerlExt::CQ_MANDATORY or AD_MANDATORY.

Table of Contents





Create a validation field hook.
Updated: 12/10/09
In the Fields matrix, click in the Validation column of the field you want validated and choose the preferred scripting language. Your code should simply return a non-null "result" if there is a problem with the validation. When satisfied with the changes, select Hooks -> Compile.

Note: A field's validation hook runs every time any field is changed on a record. In other words, if any field on a record is changed, the system runs all field validation hooks. For that reason, you don't want to put code in there that is too labor intensive, as it will get run multiple times per record edit.
There are a couple of ways around that potential performance problem. 1) Place the field validation code into an action validation hook. 2) In the field's Value Changed hook, SetNameValue("fieldname_valuechanged","1"). In the field's Validation hook, start with -- if GetNameValue("fieldname_valuechanged") eq "1"... . That way, even though the Validation hook gets run many times during a record edit, the bulky code there will only get run if that particular field has been changed. A sketch of option (2) follows.
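A minimal sketch of option (2), using a hypothetical field named Severity:

	# In the Severity field's Value Changed hook:
	$entity->GetSession->SetNameValue("Severity_valuechanged","1");

	# In the Severity field's Validation hook:
	if ( $entity->GetSession->GetNameValue("Severity_valuechanged") eq "1" ) {
		# ...run the expensive checks only when Severity actually changed...
	}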

Table of Contents





Create a global script.
In CQ, one has the ability to write code that can be called from within any hook as a way to help centralize hook code maintenance.
1) In the Workspace, expand the Global Scripts folder.
2) Right-click on the desired language and choose Add.
3) Right-click on the new global script and choose Rename; must be unique.
4) The script must be self-contained within a subroutine appropriate for the language chosen. When done, select Hooks -> Compile to test and save the code. The subroutine can now be called from within any hook.

Note that global scripts should only be created for code that is to be utilized across multiple record types. If the code is to be used in multiple places inside a single record, create a record script instead. The reason is that global scripts get loaded regardless of record type at login. If all of your code is in global scripts, it may cause an undue overhead in the web interface.

Table of Contents





Create a record script.
In CQ, one can have the user invoke a script by selecting it via a pull-down menu, push button or called from within a hook. The script can then do most anything, most often returning information to the user.
1) In the Workspace, expand the record type with which this script will be associated until you see the Record Scripts folder.
2) In the Record Scripts folder, right-click on the desired language and choose Add, renaming the new script to something mnemonic and unique.
3) Double-click on the script name to invoke the editor. Unlike Global Scripts, these are already placed in an appropriately named subroutine for you. Select Hooks -> Compile to test and save your work.
NOTE: If the Record Script is associated with a push button and invoked from the web and the script returns a string, CQ Web will interpret it as an error message and the hook will fail. Also, do not invoke GUI based objects from the script if working with the web, as they will appear on the server, not on the client.

Table of Contents





Get an array of values for a specific user group's field.
# Usage:  @result = GetGroupMembersField(<group_name>,<field_name>);
#    ex:  @fullnames = GetGroupMembersField("Managers","fullname");
sub GetGroupMembersField {

        my @result;
        my @group = $_[0];
        my $ref   = \@group;
        my $field = $_[1];

        my $queryDef = $session->BuildQuery("users");
        $queryDef->BuildField($field);
        my $filterNode = $queryDef->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
        $filterNode->BuildFilter("groups",$CQPerlExt::CQ_COMP_OP_EQ,$ref);
        my $resultSet = $session->BuildResultSet($queryDef);
        $resultSet->Execute();
        my $status = $resultSet->MoveNext;
        while ( "$status" eq "$CQPerlExt::CQ_SUCCESS" ) {
                push(@result,$resultSet->GetColumnValue(1));
                $status = $resultSet->MoveNext;
        }
        return @result;
}
Table of Contents



Get a user value based on another user value.
The following Perl code should be placed in a Global Script.

# This routine will retrieve a user field value based on
# another of the same user's fields.  The first argument is
# the field name of the known value.  The second argument is
# the known value.  The third argument is the field
# name of the value you wish to retrieve.
# If the known value is not found, nothing is returned without error.
# For example, if you know the user's email address and want to
# know their login:
# $login = GetUserValue("email",$user_email,"login_name");
sub GetUserValue {

	my $known_fieldname   = $_[0];
	my @known_value       = $_[1];
	my $unknown_fieldname = $_[2];

	# Start building a query of users.
	my $QueryDefObj = $session->BuildQuery("users");

	# Have the output be the unknown value.
	$QueryDefObj->BuildField($unknown_fieldname);

	# Filter on the designated known value.
	my $FilterOp = $QueryDefObj->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
	$FilterOp->BuildFilter($known_fieldname,$CQPerlExt::CQ_COMP_OP_EQ,\@known_value);

	# Execute the query and return the result.
	my $ResultSetObj = $session->BuildResultSet($QueryDefObj);
	$ResultSetObj->Execute;
	if ( $ResultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		return $ResultSetObj->GetColumnValue(1);
	}

	return;
}
Table of Contents



Automatically transfer record mastership.
In ClearQuest MultiSite (CQMS), individual records are mastered at a single site. Mastership can only be transferred by the site that currently masters the record. One only needs to change the value of the ratl_mastership field. If a database has not been replicated yet, the ratl_mastership field value is simply "<local>". Place the following code in the Submit action Commit hook. The remote site will not actually gain mastership until after the next scheduled synchronization.
  $entity->SetFieldValue("ratl_mastership","remote-site-replica-name");
Table of Contents



Web hooks.
Hooks will run on the web server with CQ Web.
To web enable a field, open the parent field's form control Properties on the form. This only needs to be done on the parent field. In the Web Dependent Fields tab, add the names of any dependent fields.
Also for dependent fields, the box types of the parent and dependent field must be one of Drop-down List Box, Combo Box, or Drop-down Combo Box. For performance reasons as well, avoid the List Box control in the web interface.

Table of Contents





Set permission by user.
The Behaviors matrix doesn't have a "User Groups" option like the Actions matrix Access Control does. So, the administrator must write a hook to set conditional permission in a field. The following code will return an optional or read-only result based upon what fullname is set in another field. That is, SetPermissionByUser is called from a Permission hook in a field whose permissions are to be set based upon who is logged in and what fullname is in another field. This code should be placed in a Global Script.
##################
# Permission is based on who is set in the specified
# field.  The field is assumed to contain the "fullname"
# of the user.  If the current user matches the fullname
# in the specified field, the result is set as optional,
# otherwise the result is set as read-only.

# Usage:
# $result = SetPermissionByUser("Reviewer1");

sub SetPermissionByUser {

	my $field = $_[0];
	my ($field_value,$current_user);

	$field_value = $entity->GetFieldValue($field)->GetValue();
	$result      = $CQPerlExt::CQ_READONLY;
	if ( "$field_value" ) {
		$current_user = $entity->GetSession->GetUserFullName;
		if ( "$current_user" eq "$field_value" ) {
			$result = $CQPerlExt::CQ_OPTIONAL;
		}
	}

	return $result;
}
##################
Table of Contents



Set permission by group.
The Behaviors matrix doesn't have a "User Groups" option like the Actions matrix Access Control does. So, the administrator must write a hook to set conditional permission in a field. The following code will return an optional or read-only result based upon whether or not the current user is a member of the specified group. This code should be placed in a Global Script.
##################
# This will return an optional permission if the
# current user is a member of the specified group.
# This routine will only work when called from a field
# permission hook whose Behavior has been set to
# USE_HOOK.

# Usage:
# $result = SetPermissionByGroup("GroupName");

sub SetPermissionByGroup {

	my $groupname = $_[0];
	my $username  = $entity->GetSession->GetUserLoginName;
	my @groups    = $entity->GetSession->GetUserGroups;
	my $group;

	$result = $CQPerlExt::CQ_READONLY;
	foreach $group (@groups) {
		if ( "$group" eq "$groupname" ) {
			$result = $CQPerlExt::CQ_OPTIONAL;
			last;
		}
	}

	return $result;
}
##################
Table of Contents



Get active records.
The following Perl code will return a list of record values associated with active records. It should be placed in the schema as a Global Script. For example, you may want to return a list of the fullnames of active users.

# Build a choice list of active records.
# The first argument is the record type, the second
# argument is the field that holds a 1 or 0 that
# indicates whether or not a record is active.  The
# third argument is the field whose values will be
# returned in an array.

# Usage:
# @choices = GetActiveRecords("Project","Project_is_active","Project_Name");
# -or-
# @choices = GetActiveRecords("users","is_active","fullname");

sub GetActiveRecords {

	my $record_type      = $_[0];
	my $active_fieldname = $_[1];
	my $return_fieldname = $_[2];
	my @choices;

	# Start building a query of the record type.
	my $QueryDefObj = $session->BuildQuery($record_type);

	# Build the list of output fields.
	$QueryDefObj->BuildField($return_fieldname);
	$QueryDefObj->BuildField($active_fieldname);

	# Execute the query and return the result.
	my $ResultSetObj = $session->BuildResultSet($QueryDefObj);
	$ResultSetObj->Execute;
	while ( $ResultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		if ( $ResultSetObj->GetColumnValue(2) == 1 ) {
			push(@choices,$ResultSetObj->GetColumnValue(1));
		}
	}
	return @choices;

}

Table of Contents



Create an Action Notification hook.
Email rules are an easy way for users to set up automated communications. However, if you want hard-coded, process-type communications, you'll need to lock down the email rule records created by the process team. See Lock down certain email rules. Email rules also have a couple of downsides, in that you cannot control very finely how the information is displayed, nor can you add explanatory text. Actually, there is a work-around for that one. See Add free-form text to an email rule.
An alternative to email rules that allows finer control of displayed information is an Action Notification hook. The following is an Action Notification hook example in Perl. Note that my schemas often contain many "hidden" fields that track information for me. The following would need to be modified to show only relevant fields.

# This hook will build a list of field values that have changed.

my ($fieldnames,$session,$fieldname,$oldvalue,$newvalue);
$session = $entity->GetSession;


# Loop through the fields.
$fieldnames = $entity->GetFieldNames;
foreach $fieldname (@$fieldnames) {

    $oldvalue = $entity->GetFieldOriginalValue($fieldname)->GetValue;
    $newvalue = $entity->GetFieldValue($fieldname)->GetValue;

    # Compare the values
    if ( ! "$oldvalue" && "$newvalue" ) {
        $session->OutputDebugString("Field ($fieldname) newly set to ($newvalue).\n");
    } elsif ( "$oldvalue" && ! "$newvalue" ) {
	$session->OutputDebugString("Field ($fieldname) has been cleared.\n");
    } elsif ( "$oldValue" ne "$newvalue" ) {
	$session->OutputDebugString("Field ($fieldname) changed from ($oldvalue) to ($newvalue).\n");
    }
}
Table of Contents



Ensure a specified date is not in the past.
The following is written to be used as a Global Script called from a DATE_TIME field's Validation hook.
# ######################
# Ensure the specified date is not previous to today.
# This is designed to be called from a date field's
# validation hook.
# Usage:
# $result = CheckDate($entity->GetFieldValue($fieldname)->GetValue);

sub CheckDate {

	my $field_date = $_[0];
	return if ( ! "$field_date" );

	$field_date      = (split(/ /,$field_date))[0];
	$field_date      =~ s/\-//g;
	my $current_date = GetCurrentDate;
	$current_date    = (split(/ /,$current_date))[0];
	$current_date    =~ s/\-//g;
	if ( $field_date < $current_date ) {
		return $result = "The specified date is in the past: $field_date";
	}

	return;
}

Table of Contents





Use regular expressions in VBScript.
Updated: 02/24/06
The following is sample code that ensures an entered ticket number conforms to a standard. This code goes in the field's Validation hook.

	' '''''''''''''''''''''''''''''
        ' A ticket number must adhere to the format of CRdddddd, 
        ' where "dddddd" is a 6-digit number. 
        dim value 
        value = GetFieldValue("Ticket").GetValue 
        if value <> "" then
                set regEx = New regexp 
                regEx.Pattern    = "^CR\d{6}$" 
                regEx.IgnoreCase = True 
                Set Matches = regEx.Execute(value) 
                match_not_found = 1 
                for each Match in Matches 
	                match_not_found = 0 
                next 
                if match_not_found then 
                        Ticket_Validation = "Ticket numbers must adhere to CRdddddd, where dddddd is 6-digit number."

                end if 
        end if 

Table of Contents





Determine the unique key field(s) of a record type.
Updated: 12/22/15
Version: 7.1.2
There is no way to programmatically determine which field name(s) of a given record type is(are) designated as the unique key field(s).
If your purpose for getting the unique key is to determine its value, you can just use $entity->GetDisplayName.
Because multiple unique key values are space-separated in the unique key and there may be spaces in the value defining the unique key, there is no definitive way to programmatically parse the unique key to determine the separate field values. For that reason alone, it's recommended that unique key field values be limited to those without embedded spaces.
Or, you can run a query that returns the dbid and then use $session->GetEntityByDbId.
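A minimal sketch of that alternative, assuming a Defect record and a $dbid already obtained from a query that returned the dbid column:
	$entity = $session->GetEntityByDbId("Defect",$dbid);
	print $entity->GetDisplayName."\n";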
The unique key's value can also be returned in a query by using the following construct. Unfortunately, GetColumnLabel only returns "dbid", which isn't usually the unique key field name.
	$queryDefObj = $session->BuildQuery("System");
	$queryFieldDefObj = $queryDefObj->BuildUniqueKeyField();
	$queryFieldDefObj->SetIsShown(1);
	$resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute;
	if ( $resultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		print $resultSetObj->GetColumnValue(1)."\n";
	}
In the CQ Designer, right-click on the record type and select Unique Key.
Note that the order of multiple unique key fields is the order in which they were added to the record type, which may not be alphabetical order.

Table of Contents





GetEntity if the record type has more than one unique key.
Updated: 03/31/06
Basically you concatenate the display names with a space. The field order is the order in which they appear in the record type. That is, the order in which they appear when you first bring up the Fields matrix in the Designer, and not necessarily alpha-numeric order. Yes, even though you're passing in a space-separated list of values, the values themselves can have embedded spaces.
For example, in VBScript:
	set recordObj = sessionObj.GetEntity("RecordTypeName", "key1_value" & " " & "key2_value")
Table of Contents



Determine the most recent checked-in version of a schema.
Updated: 06/08/06
There may be a more straightforward way to accomplish this, but this works.

$schema_name = "Enterprise";

# Log into an admin session.
$adminSession = CQAdminSession::Build;
$adminSession->Logon($login,$passwd,$dbset);

# Loop through the list of schemas.
$schemasObj = $adminSession->GetSchemas;
for ( $x = 0; $x < $schemasObj->Count; $x++ ) {

	# Get the schema name of each schema and compare it.
	$schemaObj	= $schemasObj->Item($x);
	$schemaName	= $schemaObj->GetName;
	if ( $schemaName eq $schema_name ) {

		# Get a list of the schema revisions and then get the ID of the last one.
		$schemaRevsObj	= $schemaObj->GetSchemaRevs;
		$schemaRevObj	= $schemaRevsObj->Item($schemaRevsObj->Count - 1);
		$schemaRev	= $schemaRevObj->GetRevID;
		last;
	}
}
print "($schema_name) schema is at version ($schemaRev).\n";
CQSession::Unbuild($adminSession);

Table of Contents



Debug a hook.
Updated: 06/13/06
MsgBox (Client only, Windows only, PERL and VBScript)
This function lets you place a Windows Message Box on the screen with the output you specify. The execution of the hook pauses until the OK button on the Box is clicked (for example, MsgBox "My Text."). The message box only displays where the hook is executed. When writing VBScript or Perl hooks, you can use the message box (MsgBox) function to output debugging information. By calling this utility with a string parameter, a popup dialog containing the text is displayed. You can use MsgBox in Perl with the following syntax: eval("use Win32; Win32::MsgBox('called from Perl')"); Note: Do not invoke this utility through CQ Web. If you use the MsgBox function, you can ensure that your code is not executed in a Web session context with the _CQ_WEB_SESSION session variable.

OutputDebugString (Client only, Windows or Unix, PERL or VBscript)
When dbwin32.exe is active, it displays all messages generated by the OutputDebugString method of the Session Object, which you can use to output debugging messages from a hook while it is running. By calling the OutputDebugString method, the related debug statements appear in the DBWin32 console, along with any configured tracing information. Use this after launching DBWin32 to see messages. When testing is complete, you should comment out or delete the debug strings. The Windows debugging utility dbwin32.exe is included with CQ for Windows. This method is available for both VBScript and PERL, but doesn't work for the web interface.
  myfield_value = GetFieldValue("MyField").Getvalue
  sessionObj.OutputDebugString "The value of MyField is " & myfield_value & "." & vbCrLf
Microsoft Script Debugger (Client or web, Windows only, PERL or VBScript)
You can use the Internet Explorer debugger to debug your hook code. You can download and install this debugger at the following address: http://msdn.microsoft.com/scripting. Run a search for "Microsoft Script Debugger" to find the download. Click on the "Microsoft Script Debugger" link for debugger help. A hook runtime error launches the debugger (if it is not launched, you need to read the debugger documentation). To force the debugger to launch, add a "stop" statement to your VBScript hook code, and the debugger launches at that point. However, I have NOT been able to get this to work properly.

Microsoft Development Studio VBScript debugger (Client only, Windows only, VBScript only)
General debugging of VBScript hooks can be done with the Microsoft VBScript Debugger. If you have Microsoft Visual Studio installed, you can use its VBScript debugger to debug your hook code.

Table of Contents





Set up detailed history (AuditTrail) of records.
Updated: 06/14/06
Most CQ record types already have built-in history. However, that history only tells that a change was made, and not what was changed. Whether or not you choose the AuditTrail package or implement it manually, unfortunately detailed history cannot be set up for built-in record types, such as "users". Note that even though you may implement detailed history, the built-in history is still useful for things like queries where you want to filter on action.timestamp.
As of 2003.06.13, CQ has a package called AuditTrail. That package will create a new tab that simply keeps track of who changed what field from what value to what value and when. Below is some hook code that implements a similar thing manually. The package has some upsides and some downsides. One benefit it has over the hook code is that it will only record which line changed in a multiline text field and tell you the line number. This is a nice feature to have if minor changes are made to large blocks of text. Unfortunately, I discovered a downside. It can only handle a certain number of fields. I connected it to a record type that is small and it worked as advertised. But, when I connected it to a record type that is very large, I got an SQL error message stating "too many fields". That problem can't be overcome by adding a package customization to exclude certain fields from the report, because the customization only excludes them from the end report; you still get the SQL error. The AuditTrail package supports customizations for excluding fields, formatting, and the ability to disable (not remove) the audit trail. Another downside is that if, while modifying a record, you edit a field but then immediately set it back to the original value, the AuditTrailLog will show that field as being modified, but with the "old" and "new" values the same.
The following hook code will record detailed history as well. Since there are many more fields in a record type than which you'd like to track history, only fields listed in a dynamic list are considered. The following code is run from an action hook's Validation:

	' ''''''''''''''''''''''''
	' Update the detailed history.
	' This wasn't placed in the All_actions action Validation because we
	' need the actionname to indicate the real action.
	Detailed_History_Capture "Defect", actionname, Defect_Validation

In turn, that hook calls the following global script:

' ''''''''''''''''''''''''''''
' This subroutine will record the old and new values in the
' Detailed_History field if the field value has changed.
' The TR, Defect, ST, and CN record types have their field
' lists in dynamic lists.  All other record types will record
' all field values except those explicitly listed below.
sub Detailed_History_Capture(record_type, action_name, validation_message)

	' IMPORTANT NOTE: This script is called at the action Validation stage.  Because changes
	' in reference-type fields, such as the CN/Defect parent/child relationship, aren't actually
	' performed until the Commit stage, those changes won't show up in the Detail History.
	' It is unknown how to get around this and record changes in reference lists.

	' Don't capture detailed history for the Submit, Import or Delete actions.
	action_type = GetActionType
	if action_type = AD_SUBMIT or action_type = AD_IMPORT or action_name = AD_DELETE then
		exit sub
	end if

	' The "validation_message" variable will only be non-null if there is a validation error.
	' But, for an unknown reason, during the Duplicate and Unduplicate actions, while there
	' is no error message, "validation_message" contains a non-sensical string.  So, the "if"
	' statement here is pretty kludgy to circumvent this problem.  Also, in the web interface,
	' the "validation_message" variable has a series of question marks.  So, a check is made
	' for that as well.  This assumes that no "real" error message has exactly 47, 43 or exactly
	' 14 characters.
	length = len(validation_message)
	if ( validation_message <> "" and length <> 47 and length <> 43 and length <> 14 ) then
		exit sub
	end if


	dim existing_history, field_list, fieldname, orig_value, new_value, history_entry
	dim fullname, date, field_changed, x


	' Build the list of field names that are to be monitored.
	set sessionObj = GetSession
	if record_type = "Test_Requirement" then
		listname = "Detailed_history_TR"
	elseif record_type = "Defect" then
		listname = "Detailed_history_Defect"
	elseif record_type = "ScriptTotals" then
		listname = "Detailed_history_ST"
	elseif record_type = "CodeNotification" then
		listname = "Detailed_history_CN"
	else
		listname = ""
	end if
	if listname <> "" then
		field_list = sessionObj.GetListMembers(listname)
	else
		field_list = GetFieldNames
	end if


	' Initialize some variables.
	field_changed    = 0	
	fullname         = sessionObj.GetUserFullName
	date             = Now
	existing_history = GetFieldOriginalValue("Detailed_History").GetValue
	history_entry    = "**************************************************************" & vbCrLf & "DATE:   " & date & vbCrLf & "ACTION:   " & action_name & vbCrLf & "USER:   " & fullname & vbCrLf


	' Loop through the fields.  Add changed fields to the new history entry.
	for each fieldname in field_list
		if fieldname <> "" and fieldname <> "Detailed_History" and fieldname <> "history" and fieldname <> "Old_Names" and fieldname <> "Note_Entry" and fieldname <> "Notes_Log" and InStr(fieldname,"Modified_") = 0 then
			orig_value = GetFieldOriginalValue(fieldname).GetValue
			new_value  = GetFieldValue(fieldname).GetValue
			if orig_value <> new_value then
				history_entry = history_entry & vbCrLf & "FIELD NAME (" & fieldname & "):" & vbCrLf & "   OLD VALUE:   " & orig_value & vbCrLf & "   NEW VALUE:   " & new_value & vbCrLf
				field_changed = 1
			end if
		end If
	next


	' Change the Detailed_History field if necessary.
	if field_changed then
		SetFieldValue "Detailed_History", history_entry & vbCrLf & existing_history
	end if

end sub

Table of Contents



Copy hook code from one location to another.
Updated: 08/14/06
It's possible to copy code out of a schema hook and paste it elsewhere. However, don't grab the grayed-out lines. Those lines cannot be copied and will prevent the "Copy" command from working. Unfortunately, there is no way to "export" hooks from a schema.
NOTE: This was fixed in CQ 7+. It now allows you to copy the grey code lines as well.

Table of Contents





Add custom PERL modules to cqperl.
Updated: 09/13/06
When you run PERL hook code inside a schema, all the modules in the CQ installation are loaded for use. So, if you want the module available in the schema, you can either add it as a global script or add it as a .pm PERL module in "C:\Program Files\Rational\common\lib\perl5\5.6.1".
The problem with adding it as a global script is that the module will get loaded every time any record is accessed, whether or not the module is needed, which can be an unnecessary overhead. The alternative to that is to add it to a record script. That is, you're transferring the module over the network each time a user logs into CQ.
The problem with adding it to the external directory is that you would need to add it to every machine that has CQ installed. This can be built into the install as a custom feature, but is a pain after the fact, especially if you have hundreds of users.
So, if you can't add the module as part of the installs before it gets rolled out to all the users, I would recommend adding it as a global script.

Table of Contents





Check a Windows registry setting.
Updated: 04/21/11
CQ version: 7.0.1
The following code can be used to check Windows registry keys. The use in the example is to check whether or not the Windows fat Client user has enabled their email notification. Note that the email registry information can be found in different places depending on the CQ version. The key locations listed below are from newest to oldest CQ version. There is an easier way to do this with $session->IsEmailEnabled, but this is just an example.
	require Win32::Registry;
	@keys = (	"Software\\Rational Software\\Email\\SendMail",
			"Software\\Rational Software\\ClearQuest\\7.0.0\\SendMail",
			"Software\\Rational Software\\ClearQuest\\2003.06.00\\SendMail"
		);
	$key_not_found = 1;
	foreach $key (@keys) {
		eval {
			$main::HKEY_CURRENT_USER->Open($key,$keyref);
		};
		if ( ! "$@" ) {
			$key_not_found = 0;
			last;
		}
	}
	if ( $key_not_found ) {
		return "\nEmail notification must be enabled.\n";
	}

	$keyref->GetValues(\%pairs);
	foreach $key (keys %pairs) {
		if ( $key eq "SendActive" ) {
			if ( $pairs{$key} ne "1" ) {
				return "\nEmail notification must be set as active.\n";
			}
			next;
		}
		
		etc ...
	}

Table of Contents





Ensure all hooks run in nested actions; CQHookExecute.
Updated: 03/06/09
CQ version: 7.0.1
When an action on a record type is executed, each of the hook scripts is executed in order: Permission, Initialization, Validation, Commit, and Notification. But, if a record is submitted or modified from within an action hook of a different record, the Permission and Notification hooks don't run.
The Permission hook doesn't run because internal code is run at the super user level, which already has all the necessary rights.
The Notification hook doesn't run because that hook is designed to send out emails, but can run other code too. The logic is that you don't want multiple emails to be sent out for what looks like a single action to the user.
If you want to override the default behavior, from within the parent hook script set CQHookExecute. When the action is performed on the child record, the Permission and Notification hooks will execute.
	$session->SetNameValue("CQHookExecute","1");
WARNING: Because that is a session variable, it will be effective for all edits of all record types. For that reason, it's best to set it back to zero in the parent script when the child has finished its action.
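A minimal sketch of that pattern in the parent hook:
	$session->SetNameValue("CQHookExecute","1");
	# ...submit or modify the child record here...
	$session->SetNameValue("CQHookExecute","0");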

Table of Contents





Get a list of entity defs.
Updated: 04/07/09
The API has an EntityDefs object, but it is unknown how that object is obtained. As an alternative to that API call, the following substitute can be used.
	$entityDefNames = $sessionObj->GetEntityDefNames;
	foreach $record_type (@$entityDefNames) {
		$entityDefObj = $sessionObj->GetEntityDef($record_type);
	}
Table of Contents



Access a website from within CQ.
Updated: 05/21/09
CQ can interact with web pages. This is useful if another system posts on a website the status of data that CQ needs.
	use LWP::Simple;
	$result = get("$url");
Note that this "get" doesn't return any errors. Either the $result is there or it isn't. That is, there is no way to differentiate between the website be contacted, but not returning any data, and not being able to contact the website at all.

Table of Contents





Determine the name of the current user db.
Updated: 06/24/09
	$dbname = $session->GetSessionDatabase->GetDatabaseName;

Table of Contents





Determine the current record type/name.
Updated: 04/07/10
	$record_type = $entity->GetEntityDefName;

Table of Contents





Get a list of attachments.
Updated: 04/22/10
$att_fields = $entity->GetAttachmentFields;
for ( $x = 0; $x < $att_fields->Count; $x++ ) {
	$atts = $att_fields->Item($x)->GetAttachments;
	for ( $y = 0; $y < $atts->Count; $y++ ) {
		$att		= $atts->Item($y);
		$name		= $att->GetFileName;
		($short_name)	= $name =~ m/([^\\\/]+)[\\\/]?$/;
	}
}

Table of Contents





Determine the name of the current dbset.
Updated: 04/13/11
	$dbset = $session->GetSessionDatabase->GetDatabaseSetName;

Table of Contents





Programmatically determine which entry is selected in a list box.
Updated: 04/21/11
Version: CQ 7.0.1
List Box and List View controls allow a user to select (click on) an entry for the purpose of then clicking a button, such as Remove.
The code for the "New", "Add", and "Remove" buttons is hidden from CQ admins. However, you can create your own custom button by choosing "Other" in the form control and then creating a record script for it.
The trick then is to know which entry the user has chosen (clicked on) in the field. The API provides an EventObject ListSelection call for that purpose. Unfortunately, that call doesn't work in the web, nor does it work for Perl.

Table of Contents





In Perl, ensure all values in an array are unique.
Updated: 04/22/11
	my %temp;
	my @unique_list = grep(!$temp{$_}++,@original_list);

Table of Contents





Find all back reference fields in a schema.
Updated: 05/25/11
The loop excludes the built-in "ratl_" fields.
my $entityDefNames = $session->GetEntityDefNames;
foreach $record_type ( @$entityDefNames ) {

	print "$record_type\n";

	# Loop through each field in the entitydef.
	my $entityDefObj	= $session->GetEntityDef($record_type);
	my $fieldnames		= $entityDefObj->GetFieldDefNames;
	foreach $fieldname (@$fieldnames) {

		next if ( "$fieldname" =~ /^ratl_/ );

		my $fieldtype = $entityDefObj->GetFieldDefType($fieldname);
		# Field type 5 is REFERENCE and 6 is REFERENCE_LIST.
		if ( $fieldtype == 5 || $fieldtype == 6 ) {
			print "\t$fieldname\n";
		}
	}
}
Table of Contents



Get a list of dynamic lists.
Updated: 07/07/11
	$listnamesRef = $session->GetListDefNames;
	foreach $list_name (@$listnamesRef) {
		print "$list_name\n";
	}
Table of Contents



Set another field's choice list.
Updated: 09/08/11
	$entity->SetFieldChoiceList(fieldName, choiceList);
Ex:
	$entity->SetFieldChoiceList("Name",\@names);
Table of Contents



Only run the hook in the client interface.
Updated: 09/20/11
This is useful if you need to run some client-only function, like MsgBox.
	return if ( $session->HasValue("_CQ_WEB_SESSION") );
Table of Contents



Do a case-insensitive sort for a choice list.
Updated: 05/03/12
The built-in Perl command "sort" performs an ASCII sort, which means that the returned list will be sorted in ASCII table order. Unfortunately, ASCII lists all upper-case letters before lower-case letters, which can return a list that isn't sorted well for human consumption. The following will sort the data in a case-insensitive manner.
	@choices = sort {lc($a) cmp lc($b)} @unsorted_data;
Table of Contents



Compare dates subroutine.
Updated: 05/30/12
sub CompareDates {

	# This script will compare the two dates provided to it.
	# It requires three arguments inside of one string.
	#	date/time 1
	# 	comparison operator:  <, =, or >
	# 	date/time 2
	# It will return a 1 if the equation evaluates to true,
	# or a 0, if it does not.

	# The dates must be in a standard format of:
	#	yyyy-mm-dd hh:mm:ss		# 24-hr format, as returned from the ClearQuest GetFieldValue API method
	# - or -
	#	mm/dd/yyyy hh:mm:ss		# 24-hr format, as returned from the ClearQuest GetCurrentDate global script
	# - or -
	#	WWW MMM {d}d hh:mm:ss yyyy	# 24-hr format, as returned from the Perl localtime function

	# Usage:
	#	if ( CompareDates("$migration_signoff < $migration_date") ) {
	#		$error = "The Migrator Sign Off Date cannot pre-date the Migration Date.\n";
	#		...

	my $script;
	($script = (caller(0))[3]) =~ s/^main:://;

	my %months = (
		"Jan"	=> 0,
		"Feb"	=> 1,
		"Mar"	=> 2,
		"Apr"	=> 3,
		"May"	=> 4,
		"Jun"	=> 5,
		"Jul"	=> 6,
		"Aug"	=> 7,
		"Sep"	=> 8,
		"Oct"	=> 9,
		"Nov"	=> 10,
		"Dec"	=> 11
	);

	# Parse the input.
	my $input = $_[0];
	my($date1,$operator,$date2);
	my($year1,$month1,$day1,$hour1,$min1,$sec1,$year2,$month2,$day2,$hour2,$min2,$sec2);

	if ( "$input" eq "" ) {
		exit_error("CompareDates syntax ERROR: Requires a date comparison string and got nothing.\n","CompareDates syntax error");
	}
	if ( "$input" =~ /^(.+) ([\<\=\>]) (.+)$/) {
		$date1		= $1;
		$operator	= $2;
		$date2		= $3;
	} else {
		exit_error("CompareDates syntax ERROR: Input argument not formatted correctly: $input\n","CompareDates syntax error");
	}

	if ( $date1 =~ /^(\d{4})\-(\d{2})\-(\d{2}) (\d{2})\:(\d{2})\:(\d{2})$/ ) {
		$year1	= $1;
		$month1	= $2 - 1;
		$day1	= $3;
		$hour1	= $4;
		$min1	= $5;
		$sec1	= $6;
	}
	if ( $date1 =~ /^(\d{2})\/(\d{2})\/(\d{4}) (\d{2})\:(\d{2})\:(\d{2})$/ ) {
		$year1	= $3;
		$month1	= $1 - 1;
		$day1	= $2;
		$hour1	= $4;
		$min1	= $5;
		$sec1	= $6;
	}
	if ( $date1 =~ /^(\w{3}) (\w{3})\s+(\d{1,2}) (\d{2})\:(\d{2})\:(\d{2}) (\d{4})$/ ) {
		$year1	= $7;
		$month1	= $months{$2};
		$day1	= $3;
		$hour1	= $4;
		$min1	= $5;
		$sec1	= $6;
	}
	if ( "$year1" eq "" ) {
		exit_error("CompareDates syntax ERROR: Date1 argument not formatted correctly: $date1\n","CompareDates syntax error");
	}

	if ( $date2 =~ /^(\d{4})\-(\d{2})\-(\d{2}) (\d{2})\:(\d{2})\:(\d{2})$/ ) {
		$year2	= $1;
		$month2	= $2 - 1;
		$day2	= $3;
		$hour2	= $4;
		$min2	= $5;
		$sec2	= $6;
	}
	if ( $date2 =~ /^(\d{2})\/(\d{2})\/(\d{4}) (\d{2})\:(\d{2})\:(\d{2})$/ ) {
		$year2	= $3;
		$month2	= $1 - 1;
		$day2	= $2;
		$hour2	= $4;
		$min2	= $5;
		$sec2	= $6;
	}
	if ( $date2 =~ /^(\w{3}) (\w{3})\s+(\d{1,2}) (\d{2})\:(\d{2})\:(\d{2}) (\d{4})$/ ) {
		$year2	= $7;
		$month2	= $months{$2};
		$day2	= $3;
		$hour2	= $4;
		$min2	= $5;
		$sec2	= $6;
	}
	if ( "$year2" eq "" ) {
		exit_error("CompareDates syntax ERROR: Date2 argument not formatted correctly: $date2\n","CompareDates syntax error");
	}

	# Get the integer equivalent of the dates.
	use Time::Local;
	my($date1epoch,$date2epoch);
	eval {
		$date1epoch = timelocal($sec1,$min1,$hour1,$day1,$month1,$year1);
	};
	if ( "$@" ne "" ) {
		exit_error("CompareDates date1 has a syntax problem:\n$@\n","CompareDates syntax error");
	}
	eval {
		$date2epoch = timelocal($sec2,$min2,$hour2,$day2,$month2,$year2);
	};
	if ( "$@" ne "" ) {
		exit_error("CompareDates date2 has a syntax problem:\n$@\n","CompareDates syntax error");
	}

	# Perform the comparison.
	if ( ($operator eq "<" && $date1epoch < $date2epoch) || ($operator eq "=" && $date1epoch == $date2epoch) || ($operator eq ">" && $date1epoch > $date2epoch) ) {
		return 1;
	} else {
		return 0;
	}
}
Table of Contents



Get a listing of all fields in given record type.
Updated: 06/04/12
Version: 7.0.1.8
	$entitydef_o	= $session->GetEntityDef("record-type");
	$fieldnames_ref	= $entitydef_o->GetFieldNames();
	foreach $field (@{$fieldnames_ref}) {
		...
Table of Contents



Determine if a record is being changed in batch update mode.
Updated: 07/30/12
Version: 7.1.2
	$is_batch_update = $session->GetNameValue("ratl_MultiModifyBatchMode");
Table of Contents



Programmatically add/remove entries to/from reference list fields.
Updated: 01/08/13
Version: 7.1.2
Note that the linked record must be referenced by its unique id. The field(s) that constitute the unique id are defined in the schema for each record type. If more than one field is used to construct the unique id, the values are space separated in the order in which they are listed in the unique id definition in the schema.
Add:
	$entity->AddFieldValue("FieldName","UniqueID");
Remove:
	$entity->DeleteFieldValue("FieldName","UniqueID");
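For example (a hypothetical sketch; the field and record names are illustrative), if the referenced record type's unique ID were built from a Product field plus a Version field, the value would be supplied space separated in that order:
	$entity->AddFieldValue("AffectedReleases","MyProduct 2.0");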
Table of Contents



Programmatically get the members of a dynamic list (named list).
Updated: 06/27/16
Version: 7.1.2.14
Note that I've used this session method successfully, but it doesn't appear in the CQ API manual.
	$members_ref = $session_o->GetListMembers("MyNamedList");
	print "MyNamedList members:\n";
	foreach $member (@$members_ref){
		print "\t$member\n";
		...
Table of Contents



Licenses
Updated: 12/14/16
Version: 7.x, 8.x

CQ uses a third party license server called FlexLM. Licenses are administered via Start -> Programs -> Rational ClearQuest version -> Rational License Key Administrator.

Different licensing types.

1) Node-locked licenses are only available for the desktop on which they are installed. There is no concept of node-locked licenses timing out.
2) Floating licenses live in a pool on a central server. These licenses time out only when the user exits the CQ client.
3) Web licenses allow users to "login" via the web and get most of the functionality that a native client would get. Web licenses time out after 20 minutes of inactivity or when the user logs out.
4) The "limited access" web interface allows users only basic access and only one query. No licenses are required for this interface.
5) Token-based licenses can be used in lieu of floating licenses. Tokens are generic to all IBM products. A given software package, such as CQ, consumes a given number of tokens, which is a much more efficient way to handle licenses across several software packages. Old floating licenses for each of the other software packages can be converted and combined into one pool of tokens.

Have FLEXlm startup at reboot on Windows.

The installation of CQ does not ensure that the license manager will start up automatically after each reboot. Go to Start -> Settings -> Control Panel -> FLEXlm License Manager -> Setup. Select "Use Windows Services" and then "Start Server at Power-Up".

Point to a remote license server on Windows.

Start the Rational License Key Administrator and go to the Settings tab. Simply select "Use single server" near the bottom and type in the hostname of the machine that has the FlexLM ClearQuest licenses on it.
If the License Key Administrator isn't installed, install it using the IBM Installation Manager. If you can't install it for some reason, you can update the Windows registry to point a computer to a new license server. Look for keys like the following:
	HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/Rational Software/Licensing/8.0/Server List:
		PortAtHost = 27000@slda302p
	and
		Entry0:
			Server Name = slda302p
Point to a remote license server on a Unix client.
The install manual has you run "license_setup". However, that isn't necessary. Assuming the CQ Client has already been installed, simply set the following environment variable:

setenv LM_LICENSE_FILE 27000@flexlm-license-server   (csh/tcsh)
export LM_LICENSE_FILE=27000@flexlm-license-server   (sh/bash)

Request permanent licenses.

CQ comes bundled with eval licenses good for 30 days. If you have purchased permanent licenses and would like them sent to you, open the License Key Administrator and fill in the information. Under the License Key(s) tab, select Request Permanent License Key(s). Choose the appropriate product when prompted, such as ClearQuest Web (which is just a special floating license).

Track license usage over time.

The gathering of license usage metrics over time requires a custom script. On Unix, run "lmstat -a". On Windows, run "lmutil lmstat -a". Parse the output and record the date-time and number in use, which can then later be plotted in Excel.
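As a rough illustration, here is a minimal sketch of such a script in Perl. The license server string and log path are assumptions; the regular expression matches the typical "Users of FEATURE: (Total of X licenses issued; Total of Y licenses in use)" lines that lmstat prints.
	use strict;
	use POSIX qw(strftime);

	my $license_server = '27000@flexlm-license-server';	# assumption
	my $logfile        = "C:\\temp\\license_usage.csv";	# assumption

	my $now = strftime("%Y-%m-%d %H:%M:%S", localtime);
	open(LOG, ">> $logfile") or die "Cannot open $logfile: $!\n";
	foreach my $line (`lmutil lmstat -a -c $license_server`) {
		if ( $line =~ /^Users of (\S+):\s+\(Total of (\d+) licenses? issued;\s+Total of (\d+) licenses? in use\)/ ) {
			# date-time, feature name, total issued, total in use
			print LOG "$now,$1,$2,$3\n";
		}
	}
	close(LOG);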
Also, there is a commercial utility that may help called Macrovision SAMreport (formerly GlobeTrotter SAMreport).
New in 8.1.4 is a built-in tool called "Rational License Key Server Administration and Reporting Tool" that has built-in reports and the ability to create custom reports for both FlexLM and token licenses.

Table of Contents





Determine the version of CQ.
In the CQ Client -> Help -> About ClearQuest...
In the CQ Designer -> Help -> About
Programmatically:
	use Win32::Registry;

	$key = "SOFTWARE\\Rational Software\\ClearQuest";
	eval {
		$::HKEY_LOCAL_MACHINE->Open($key,$parameters_o);
	};
	if ( "$parameters_o" ne "" ) {
		$parameters_o->QueryValueEx("CurrentVersion",$type,$current_cq_version);
		$parameters_o->Close;
	}

	print "$current_cq_version\n";
Table of Contents



Determine installation media type.
Updated: 09/13/06
Installations can be performed via the network or directly from cdrom/dvd. Prior to CQ 7.0, if performing an upgrade, the upgrade must come from the same media type. That is, if you installed directly from cdrom, you need to obtain the cdrom copy of the upgrade.
You can obtain from Rational Support a utility called "MediaType.exe" that will detect and report on the media install types for all installed Rational software.
  C:\> MediaType.exe .\results.txt
Table of Contents



Batch mode.
Updated: 06/06/16
Version: 7.1.2
Multiple records can be edited in one operation in the client interface.
1) Run a query to isolate the records to be edited.
2) Select/highlight the records to be edited.
3) Run the desired action on the record set.
Only the change(s) made in the first record will be applied to the remaining records.

Changing records in batch mode doesn't work in the old web interface. As of 7.1.2, there is sort of a batch mode in the Eclipse web interface, though limited. In the new web interface, you can batch edit records, but it only shows you short-string fields. You can't make a batch edit to a checkbox, such as to make a bunch of records inactive.

WARNING: Subsequent records may pick up unintended changes from the first record. For example, suppose the user enters a value in a short-string field, a validation hook takes that value and creates a new stateless record whose unique key contains the id of the current record, and the hook then links that new stateless record to the current record via a reference list field. CQ sees the link into the reference field as a change too, even though the user didn't directly make it. The problem is that CQ will then link the stateless record created during the initial edit to all the subsequent records. So, when you use batch mode, be aware that secondary edits to the record get picked up as unintended changes too.
For that reason, I avoid batch updates. If you need to backfill, say, a field in many, many records, write a script to update each one individually (see the sketch after this list). There are several benefits to doing it in a scripted loop:
1) The edits made to one record do not carry over to subsequent records, even if there are secondary changes.
2) If a validation fails, you can log the record's ID and go to the next record, whereas the batch update will halt and wait for you to acknowledge the validation issue.
3) Before committing the record, you can ask CQ what fields were changed this edit. If there are any fields that you weren't expecting to be updated, you can revert the edit, log the ID, and move on to the next record.
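As a rough illustration of such a loop, here is a minimal sketch using the CQ Perl API from an external cqperl script. The record type ("Defect"), query filter, field being backfilled ("Active"), action name ("modify"), and login details are all assumptions; adjust them to your schema.
	use strict;
	use CQPerlExt;

	# Log in (credentials and dbset are placeholders).
	our $session = CQPerlExt::CQSession_Build();
	$session->UserLogon("admin","password","SAMPL","MYDBSET");

	# Query for the records that need the backfill.
	my $querydef = $session->BuildQuery("Defect");
	$querydef->BuildField("id");
	my $filter = $querydef->BuildFilterOperand();
	$filter->BuildFilter("State", $CQPerlExt::CQ_COMP_OP_EQ, ["Closed"]);
	my $results = $session->BuildResultSet($querydef);
	$results->Execute();

	# Edit each record individually so changes never carry over.
	while ( $results->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		my $id     = $results->GetColumnValue(1);
		my $entity = $session->GetEntity("Defect", $id);
		$session->EditEntity($entity, "modify");
		$entity->SetFieldValue("Active", "No");
		my $error = $entity->Validate();
		if ( "$error" ne "" ) {
			$entity->Revert();
			print "Skipped $id: $error\n";
			next;
		}
		$entity->Commit();
	}
	CQPerlExt::CQSession_Unbuild($session);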

Table of Contents





Importing data into CQ.
Updated: 07/25/11
The following procedure is the general way one accomplishes a conversion. It is written using ClearDDTS for examples.
NOTE: You normally cannot import UCM-enabled records into CQ because CC cannot verify the data. The exception is when the user database name is the same as the one the records originally came from, i.e., you are restoring from backup in a non-standard manner (a normal restore simply restores the old database that already contains the records).

NOTE: If importing CQ records into a new database, you must have a field in the new database to contain the old IDs of the imported records. That old id is used when importing updates to the same records, such as is done when importing the history, attachments, or duplicate data.

NOTE: If you have records in CQ that are locked down to a certain group via a hook permission script, those permissions do not apply during import. In fact, field hooks will not fire at all when importing via the CQ Import Tool. Only the action Initialization and Validation hooks will fire.

Plan your requirements.
Know the requirements, choice lists, record types, forms, state transitions and actions that need to be managed in CQ. That is, examine the fields in the old defect tracking system and determine which ones will be brought forward. At this time, determine if those fields need to be mapped to new names etc...

Export the data.
Export records from the old system into ASCII import files. This is the step in which you map the fields to their new names, if any. The import file format rules are:
- The import file is a series of rows, one for each record. Records are separated by newlines. The very first row is the list of fields to be imported.
- Each record must have an entry for each field. If a field is empty, use "<<None>>" or "<<Unassigned>>".
- Each record entry must be enclosed in double quotes. If there are double quotes embedded in a string, enclose them in an additional set of double quotes.
- Each field is delimited by a comma, tab, pipe or semi-colon. Do not insert any spaces before or after the field delimiters.
- Items in a reference list must be comma separated within the double quotes.
- Fieldnames that have no proper mapping are ignored. Records in states that have no proper mapping are placed in the Submitted state.
- Dates for import files must have one of the following formats:
"6 April 1999"
"April 6, 1999 8:30.00"
"8:30.00 Apr 6 1999"
"4/6/1999 8:30.00PM"
The export phase is completely independent of CQ. That is, this step of the conversion is either written by the admin or is supplied by the other tool. A sample record import file may look like:
"id","state","submitdate","severity","project","headline","phone", ...
"00010","Submitted","April 6, 1999 8:30.00","3-Workaround","proj1,proj2","This is only a ""TEST"" of DDTS.","480-123-4567", ...
"00141","Opened","1/4/1997 11:32.00","1-Critical","myproj,yourproj,ourproj","Help!  I've fallen and I can't get up.","<<None>>", ...
Each history is its own row in a history import file. The first field in a row is the original ID. The example for histories applies to any enclosure. A sample history import file may look like:
"id","timestamp","user_name","action_name","old_state","new_state"
"00010","Apr 6, 1999 8:30.00","ejo","transition","verified","closed"
"00141","4/6/1999 8:30.00PM","ejo","This is a history test line","<<None>>","<<None>>"
Each record's attachments are on separate lines. If the old system allows more than one attachment per field name, the file paths can be comma separated within the double quotes. A sample attachment import file may look like:
"id","attachment","hostfile"
"00010","\\hostname\share\ddts\allbinaries\EJOcc00010\ejo_cshrc.txt","C:\temp\hosts"
This implies that the import is done so that CQ can see the original attachment. If it's impractical or impossible to see the originals directly from the import site, the attachments will have to be moved ahead of time to a temporary location, as in the /etc/hosts example.
NOTE: When re-importing (updating) a record, any change in a field's value will overwrite the value currently in the existing record, if any. However, if re-importing attachments, CQ will create a duplicate attachment that doesn't overwrite the original.

Create a schema.
Create a schema that contains all the fields, choice lists, actions and state transitions as needed to support the imported data. Create the map if necessary. Also, in creating the new schema, create the proper forms and databases. Conduct appropriate tests.
Since CQ creates a new record ID for each imported record, create a field called "OriginalID" or something to hold the old record IDs. The data types for the imported field values are determined by the type of the field into which they are being placed. That is, the import file has only text, but the imported data will acquire a new type, such as integer, upon import.

Test the conversion.
Export a small sample of data from the old system to a text file that uses the CQ import format. Use the ClearQuest Import Tool to move the test data into CQ. Test the results and iterate as necessary.
Separate export files must be created for records, history and attachments.
If errors occur during import, CQ creates an error.txt in your TEMP directory that contains information about all failures. Also, unimported data is placed in a file you specify during the import. After identifying and correcting the error in that data error file, simply rerun the import on that data error file to complete the process.

Perform the real conversion (import).
Export all desired data from the old system and run the CQ Import Tool on it. Repeat the process for history, attachments and duplicates if necessary. In CQ 2002+, CQ history and attachments can be imported at the same time as the data records.

Migrate CC integration info.
If the former defect tracking system was integrated with CC, such as DDTS, one needs to add CQ hyperlinks to the appropriate CC versions. In the CQ Client, create a new report based on a query whose only columns are original-DDTS-ID, CQ-record-type and CQ-record-ID with no filtering. Send the report results to a file using "Record style (columns of values)" format. Next, write a script that searches each integrated VOB for attributes of type FIXES and adds a CrmRequest hyperlink to each CC version. This implies that you've already run the ClearQuest Integration Configuration via the ClearCase Administration startup menu on Windows. The resulting describe on those versions would look like:
  ...
  Attributes:
    FIXES = "DDTS-bugnumber"
  Hyperlinks:
    CrmRequest "record-type" -> "CQ-bugnumber"
Table of Contents



Patch CQ.
Updated: 04/06/11
Patches can be downloaded from http://www.ibm.com/support/. Once there, go to Support & Downloads and look for "fix packs". Download the appropriate fix pack (msp). Note that some of those can be very large. Also note that some fix packs have dependencies on other fix packs. For example, if you want to apply 7.0.1.12 to an installation that has 7.0.1.2 installed, because of the dependencies you will need to first install 7.0.1.7, then 7.0.1.11, and then 7.0.1.12. If the release hasn't been patched yet, you can often go straight from, say, 7.0.1.0 to 7.0.1.12.
Use the command line "msiexec" to patch a release area. On the fix pack pages, there is usually a link to a technote that explains how to apply the patch. Look for that link and follow the instructions. You'll need to read the instructions to at least know which msi file is the appropriate one to patch.
msiexec /a <complete path to the msi in the release area> /p <complete path to the msp file> <UI switch> /qb /lv* <complete path to the log file>

Ex:
msiexec /a C:\cq_release\SETUP\1033_ClearQuest.msi /p c:\temp\7.0.1.12-RATL-RCQ-WIN-en-US-FP12.msp /qb /lv* c:\temp\install.log
Note that while the install help page explicitly tells you not to apply the patches to a remote machine, they are referring to patching an "installation" on a remote machine. It works fine if you are just patching a release area on a remote share. That is, in the example above, instead of having to be logged on locally to the release area's machine and use "C:\..." as the path, you can use UNC paths to a remote share, as in "\\remote_box\share_name\path\...". However, if it's possible to log onto the machine where the patch (msp) was downloaded and the release area (msi) is, do so, as the process will be MUCH faster.

WARNING: If patching a release area with msiexec using UNC path names, ensure that none of the strings involved exceed the Windows 256 character limit, or the patch will not be applied correctly and you'll have to start a new release area. Instead of using direct UNC paths, as in "\\remote_box\share_name\long_path\release_area_parent_dir...", map that to a local drive letter first, as in "Z:\release_area_parent_dir".

Table of Contents





Print from CQ.
In the Designer or Client, if one has a printer connected to the machine serving CQ, simply go to File -> Print (or Print Setup...). The Print page has an option to "Print to file" that sends the output to Output.prn. However, the format within the file is unprintable and is considered a bug (RAMBU00011487) as of CQ 2.0 P2.
On Windows, you can use Crystal Reports to create a "single report" format and put it in the CQ public folder. Users can then print a single record via Crystal. In the out-of-the-box SAMPL database there is a report format called "Defect Detail (All)" that you can use as an example. Because your schema is probably very different from the SAMPL schema, you won't be able to simply export that report format and import it into your user db.
On UNIX, there is a simple text-based reporting tool from which you can print.

Table of Contents





Index history records.
Updated: 09/13/06
Version: 8.0.1.14
When a CQ schema is applied to a database, a set of default indexes are created for each record type (table). However, that set is insufficient to maintain good performance. You'll need to create indexes for frequently queried fields.
To get a listing of existing indexes:
	SELECT TABNAME,INDNAME,COLNAMES FROM SYSCAT.INDEXES WHERE TABSCHEMA NOT LIKE 'SYS%' ORDER BY 1,2;
Even if a field (column) is queried often, if it only has a very small number of possible values, indexing it probably won't help performance. For example, if a field has two values "Yes" or "No", even if the field is included in every query for that record type, adding an index for it isn't going to help performance. In fact, keeping it indexed may actually hurt performance for no reason. The DBAs call that situation "low cardinality".

Run the following in pdsql to create an index. The name "company_name_idx" is arbitrary, but must be unique.
  create index company_name_idx on company (name);
As of CQ 7.0, history records will already be indexed in databases. See: http://www-1.ibm.com/support/

Table of Contents





Performance.
Updated: 12/21/15
Version: 7.1.2
If dealing with a remote database, the web will perform better than the Client. If dealing with a local database, the Client will perform better. In either case, there are several things that can be changed/avoided to improve CQ web and Client performance.
In addition to the notes here, the following IBM docs will help:
http://www-01.ibm.com/support/knowledgecenter/SSSH5A_8.0.1/com.ibm.rational.clearquest.install_upgrade.doc/topics/was_install/c_incr_jvm_setng.htm?lang=ru
http://www-01.ibm.com/support/docview.wss?uid=swg27023770&aid=1

Reduce the size of the Submit form.
Using a single form for Submit and Record will result in too much information being passed to the user during Submit. At the very least, take advantage of the ability in CQ to have a separate Submit form. At the most, only show fields absolutely necessary for the Submit action on the Submit form.

Control the size of user databases.
Over time, user databases can become quite large. In addition to the number of records a user sees, there are Keyword, User, History, and many other stateless records associated with the main records. For example, a database that has tens of thousands of records entered by users is actually maintaining hundreds of thousands of entries. While enterprise solutions such as Oracle and SQL Server can handle very large databases efficiently, CQ maintains relationships (parent-child) between records that force many queries to be run each time a single user query is executed. For this reason, performance comparisons to other very large databases, such as company information, cannot be made. While keeping those parent-child relationships is not the most efficient way to store data in a database, it provides a CQ schema with functionality that would not otherwise be available.
There are different methods by which the sizes of databases can be controlled. First, consider splitting user databases along project/product lines. Even if the schema is identical for different products, if the developers and management of those various products don't need to see the other products' information on a daily basis, use separate user databases.
Second, consider creating a new user database as the old one fills up. At a designated point in time, new records are submitted to the new user database. As the old records are closed, the old database will be phased out.
Third, move completed records to a permanent storage database. So that users continue to submit records to, and deal with, the same user database all the time, the CQ administrator can export completed records and their associated history to an "archive" database. This would only need to be done rarely.
Fourth, consider deleting old history records using "installutil scrubhistory".
Fifth, consider programmatically deleting old, unused records, which deletes their history records too (see the sketch below).
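A minimal sketch of that fifth item, assuming an existing $session object, a record type named "Defect", a date field named "Close_Date", and a delete-type action named "delete" defined in your schema:
	# Find Defects closed before an arbitrary cutoff date.
	my $querydef = $session->BuildQuery("Defect");
	$querydef->BuildField("id");
	my $filter = $querydef->BuildFilterOperand();
	$filter->BuildFilter("Close_Date", $CQPerlExt::CQ_COMP_OP_LT, ["2010-01-01 00:00:00"]);
	my $results = $session->BuildResultSet($querydef);
	$results->Execute();
	while ( $results->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		my $entity = $session->GetEntity("Defect", $results->GetColumnValue(1));
		# Deleting the record also removes its history records.
		$session->DeleteEntity($entity, "delete");
	}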

Employ CQMS
If the performance problem is determined to be the network and not the schema itself, consider using ClearQuest MultiSite. CQMS requires an extra license for every user utilizing a replicated database, but the cost savings in increased productivity and user satisfaction would make up for that. CQMS was released by itself in Fall, 2001 and comes bundled with CQ 2002 as part of a Custom install.

Avoid starting AdminSessions from within hooks.
If an AdminSession is started within a hook, CQ must load all the DLL's and assorted information into that session, which takes just as long as if somebody were starting the CQ Client from scratch. An example of this is the need to alter user information. While user information can be efficiently retrieved with queries, altering user information requires an AdminSession.

Avoid using too many field Validation hooks.
Every field validation hook runs every time any field value changes. If there are too many, it can affect performance. If performance is a problem and there are too many field Validation hooks, consider placing all the validations in an appropriate action hook. While the user doesn't get instant feedback if an incorrect value is entered, the improvement in performance may be worth that inconvenience. At the very least, avoid putting queries or looping in a field validation hook.

Avoid using "Recalculate Choice list".
Similar to Validation hooks, if a field's Choice List properties have "Recalculate Choice list" set, the recalculation runs every time any field value changes. If there are too many of these, performance can suffer. See Dependent Choice Lists for alternatives.
Also, if you ever want to programmatically change the values in the choice list dynamically, it won't work. Even if you explicitly SetFieldChoiceList to the new list, every time you interact with the field, because that switch is set, it will recalculate the choice list, which invalidates the values you just put there explicitly.
If you need to "recalculate" a choice list, use InvalidateFieldChoiceList from the other location that affects the contents of the choice list, as in the sketch below, and don't use the "Recalculate Choice List" property.

Reduce the number of users in a database.
Avoid simply importing your entire domain into the CQ schema with everyone inactive and then activating only those users that need access to given databases. While this may be a very efficient way for the CQ admin to populate user databases with user information, it puts too many useless records in the database. Domains can contain thousands of users, while only a few hundred actually need CQ access. While it's a little more work, find a way to pare the company's domain list down to only the relevant personnel prior to import.

Ask users to avoid using the mouse scroll wheel in the web interface.
If a pulldown list on a field has a Value Changed hook associated with it, that hook is executed every time a new value is selected in that field. Unfortunately, if a user selects a value in a pulldown list in the web and then uses the mouse scroll wheel to scan the remainder of the list, the scroll wheel actually "chooses" each value as it scrolls past, so the Value Changed hook associated with the field may be executed dozens and dozens of times.

Match the lengths and types of fields to the needs of your data.
Defining field lengths to match the needs of your data can reduce the volume of data passing between the application and the database servers. This can be particularly beneficial when you work on networks with high latency. It can also help reduce your database size and the amount of memory allocated per field. Using the correct data type for fields (for example, using a SHORT_STRING instead of a MULTILINE_TEXT, or using INT instead of SHORT_STRING) can also help general database performance. Each database vendor has different requirements in this area, so refer to your database management system vendor documentation for more information.

Avoid large choice lists.
Use a hook to only provide those choices relevant to the action, user, project etc...
If you have a large number of users, consider assigning them to CQ projects, not for the purpose of security per se, but rather to use that as a filter when constructing a choice list of users.
Keep choice lists as small as feasible.
Allowing users to choose from a pre-determined list of values ensures uniformity in record entries. However, very large lists can hinder performance and user acceptance. This precaution is mostly directed at the web interface, but can affect the Client as well.

Centralize the construction of large lists of users.
If your system has a large number of users (thousands?) and those users are in various pulldown choice lists, it's advantageous to query for and generate that list of users once and then put the result in a session variable.
1) In a global script, create a query for users. Join the list of users into a single string with a delimiter such as "#". Put that string into a session variable.
2) At the top of the global script, test if that session variable has a value. If so, retrieve the value, parse the string, and return the choice list. If not, run the query, set the session variable, and return the choice list.
3) In each field choice list hook that needs that list, just call the global script. Doing this will reduce the number of times the query needs to be run. A sketch of such a global script follows.
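A minimal sketch of such a global script, assuming a session variable named "ALL_ACTIVE_USERS", a "#" delimiter, and that the stock "users" record type with "login_name" and "is_active" fields is queryable in your schema:
	sub GetCachedUserList {
		my @users;
		if ( $session->HasValue("ALL_ACTIVE_USERS") ) {
			# Reuse the list built earlier in this session.
			@users = split(/#/, $session->GetNameValue("ALL_ACTIVE_USERS"));
		} else {
			# First call: run the query once and cache the result.
			my $querydef = $session->BuildQuery("users");
			$querydef->BuildField("login_name");
			my $filter = $querydef->BuildFilterOperand();
			$filter->BuildFilter("is_active", $CQPerlExt::CQ_COMP_OP_EQ, ["1"]);
			my $results = $session->BuildResultSet($querydef);
			$results->Execute();
			while ( $results->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
				push(@users, $results->GetColumnValue(1));
			}
			$session->SetNameValue("ALL_ACTIVE_USERS", join("#", @users));
		}
		return @users;
	}
Each field choice list hook that needs the list can then simply return GetCachedUserList().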

However, if there are multiple needs for that long list of users, you should probably take a hard look at the schema architecture and decide if there is any other way to avoid that long list. That long list is going to cause performance issues even if you centralize the generation. One option is to have the user first choose an application name or other entity and have the list of users associated with that application defined on the application's record. Then, populate the user choice lists with just those relevant users. This is just one example of how to reduce the size of that large user list; there are probably many others.

Limit the number of fields tracked in the Audit Trail package.
By default, the Audit Trail package tracks changes to any field, except those changed as part of an action's initialization phase. Review the fields for a given record type and only track changes to interesting fields. To exclude fields, see Customize the AuditTrail package.

Limit the use of dependent choice lists.
Dependent choice lists are those whose choices depend on the value of another field. Limit the depth of dependents to only about two or so. Deep, cascading choice lists hurt performance.

Avoid use of explicit LoadEntity and GetEntity in the schema.
If data is needed from another record type, it's much better to run a query and retrieve only those fields that are desired. While it takes a bit more coding to do it that way, there is a definite performance boost by using a query instead of a GetEntity. GetEntity retrieves all columns of the other record, plus it loads all columns of all records directly referenced by the other record, which results in a lot of unused data being pulled in.
The only time you really need to do a GetEntity is if you intend to edit the other record. LoadEntity is only necessary if the other record was edited and then needs to be edited again, perhaps to transition to yet another state.
Also, instead of using an API query, you can construct a SQL query. That has the benefit that you can add "WITH UR" at the end of the query, which improves query performance. But, it has the downside that the field names it uses are the database column names and not the CQ field names. If a new field is added to replace an existing one that has the same CQ name, an API query wouldn't need to be changed. However, the SQL query would still be pointing to the old database column name, which would cause problems unless it is updated too. A sketch of the API query approach follows.
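A minimal sketch of the API query alternative, assuming the current record references a "Defect" parent through a field named "parent_defect" and only the parent's State and Owner are needed (all names are illustrative):
	my $parent_id = $entity->GetFieldValue("parent_defect")->GetValue();
	if ( "$parent_id" ne "" ) {
		my $querydef = $session->BuildQuery("Defect");
		$querydef->BuildField("State");
		$querydef->BuildField("Owner");
		my $filter = $querydef->BuildFilterOperand();
		$filter->BuildFilter("id", $CQPerlExt::CQ_COMP_OP_EQ, [$parent_id]);
		my $results = $session->BuildResultSet($querydef);
		$results->Execute();
		if ( $results->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
			my $parent_state = $results->GetColumnValue(1);
			my $parent_owner = $results->GetColumnValue(2);
		}
	}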

Avoid implicit GetEntity in the schema.
The IBM Rational documentation says that the construct GetFieldValue("parent_record.State") must get the entirety of "parent_record" in the same way as GetEntity does, but my own performance tests cannot detect a performance hit doing it that way. That is, it's very easy to show the performance hit when doing an explicit GetEntity, but the above construct seems to be very fast.
However, again you could just do a query on the parent record and have it return the State.

Avoid excessive use of reference fields.
Allowing a user to simply click on a field to bring up another record is handy. However, note that when the current record is loaded, so is any other record that it references. If you have references from a record to every other type of record, performance can suffer in loading all that data. Use reference fields sparingly.

Don't create circular references between record types.
If two record types need to have references to each other, use the built-in "back reference" functionality. DO NOT create a reference from record type A to record type B and also create a reference from record type B to record type A using different field names. That is a circular reference. There are several downsides if you do that. It will be VERY difficult to import those records into another database, following fields programmatically will lead to an infinite loop, and queries will load records that have already been loaded, thus degrading performance.

Avoid excessive use of images, especially large ones.
CQ customizations allow an image to be applied to a form. While icons and images can make a form more user friendly, avoid too many images, and don't use large images. Bitmap images usually don't need to be more than a few hundred KBs at the most.

Avoid excessive use of global scripts.
Functional code can be associated with a field, an action, a record type, or globally. When a record is loaded, all field hooks, action hooks, and record scripts are loaded, and all global scripts are loaded as well. Only place code into a global script if it's needed across multiple record types.

Avoid calls to third-party applications from the schema.
The overall system functionality can be improved by including data from external systems. It's possible to make system calls or SOAP server calls from within a schema. However, the wait time while the connection is established, queried, and the data transferred back can impact performance on the CQ end. Try to avoid those.
As a workaround, consider creating a local record type to hold relevant data from the other system. Create an external script (engine) that queries the other system periodically and updates stateless records inside the CQ database. When the data is needed by a CQ ticket, it's much faster to query data inside the CQ database than it is to go out and access the other system directly.

Ensure heavily used and large tables are indexed.
Database indexing of a table column boosts performance, in that the database doesn't need to perform a full read to find information. In CQ 7 and beyond, columns in tables such as history are automatically indexed.
If you have a table (record type) that has a large number of records and needs to be queried often, have the DBA place an index in the table. Which columns get indexed depends on what information the users typically query and sort. For example, stateful records often get queried by id.
However, too many indexes can also hurt performance. Only index the most heavily queried fields/columns.

Ensure you enclose record and global scripts in "sub".
Scripts are loaded when the user starts the CQ application (Client) and when the user logs in (CQWeb). If the code is not framed as a subroutine, the code will stay in memory even when the user logs out. This can be especially taxing on a web server. Every time anyone logs into the web interface, another instance of the script would be loaded. If the script is framed as a subroutine, the memory is released when the user logs out.
Moreover, not having the code designated as a subroutine can have undesired functional consequences. If the code executes something independent of any record, that execution is possibly happening at an undesired time.
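A minimal before/after sketch (the OutputDebugString call is just a stand-in for whatever the script does):
	# Not framed: this executes when the script is loaded and the code
	# stays resident for the life of the process.
	$session->OutputDebugString("global script loaded");

	# Framed: nothing executes until the subroutine is called, and the
	# memory is released when the user logs out.
	sub LogLoad {
		$session->OutputDebugString("global script loaded");
	}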

Use query filters instead of parsing in code.
If you need to retrieve information from a specific record, don't run a query that returns all the records, then in a while loop look for the record you want. Instead, it's much more efficient to build a filter into the query that selects the target record.

Consolidate BASE action hooks.
A BASE action hook runs every time any other action is run. Starting a hook requires a fork of data. If possible, consolidate multiple BASE action hooks into one hook. Besides, the order of execution of multiple BASE action hooks is indeterminate. By putting all the code into a single BASE action hook, you can control the flow better.

Avoid use of OutputDebugString.
Writing script progress to OutputDebugString is invaluable when debugging a schema, but is an unnecessary overhead in general. To avoid having to write those strings and remove them after debugging each time, create a PERL constant. The constant can be un/set at the top of a given script or placed inside a global script for the whole schema to use.
	# In the global script.
	use constant DEBUG => 1;

	# In other scripts.
	if ( DEBUG ) { $session->OutputDebugString("ClearCase is MUCH slower than GIT!"); }
When you're done debugging and are ready for acceptance testing and/or production, just reset the constant in the global script.

Table of Contents





Turn on/off tracing for native Windows clients.
Updated: 08/12/11
When a connection is opened up to a vendor database to put or get data, an entity in the database called a cursor is created. The cursor should only live as long as the transaction. In certain error conditions, the cursors may not go away as they should. If this happens, you may experience one or more of the following: performance degradation, out of memory on the database server, out of cursors, and/or log files filling up excessively.
Restarting a database instance will clear out the current set of cursors, but probably won't solve the original problem.
The following can be used to trace the opening of cursors to help troubleshoot the problem. Place the following inside a file with a .reg extension and then execute it on the client machine. You'll need to restart the CQ Client to pick up the new registry key.
REGEDIT4

[HKEY_CURRENT_USER\Software\Rational Software\ClearQuest\Diagnostic]
"Trace"="throw;db_connect=2;sql=2;edit;session;api;cursor"
"Behavior"=""
"Report"="MESSAGE_INFO=-1"
"Output"="c:\\temp\\cqnative_trace.txt"
"Name"=""
Other trace values that may help are: Email, hooks, vbasic.
To turn off tracing, simply remove the "...\Diagnostic" registry key and restart CQ.
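If you prefer, the removal can also be scripted with another .reg file; regedit deletes a key whose name is prefixed with a minus sign:
REGEDIT4

[-HKEY_CURRENT_USER\Software\Rational Software\ClearQuest\Diagnostic]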
WARNING: You'll want to turn off tracing after the diagnostics are complete. The "Output" file can grow VERY large.

Table of Contents





Send a message to the user.
Updated: 05/16/11
There are several ways to communicate with end users. Note that if you have a choice of where to put some hook code, I recommend not placing it in a field's Validation hook, but rather in an action Validation hook. While the user won't get the message until they save the record, it is much more efficient. Keep in mind that ANY time ANY field is modified EVERY field Validation hook is executed.

Error messages:
In a BASIC Validation hook, either for a field or an action, if you set the variable named <fieldname>_Validation (field hook) or <record-type>_Validation (action hook) equal to a string, that string will be presented to the user as an error message. Error messages prevent the current record from being saved.
In Perl, those hooks can simply set $result. If the hook returns a non-null value, it's interpreted as a validation error.
The same is true for record scripts associated with button clicks.
Unfortunately, these have the downside that even if the message is just informative and not a true error, the system treats them as validation errors and won't let the action or button click proceed. For example:
Field validation:
	release_value = GetFieldValue(fieldname).GetValue
	if len(release_value) < 10 then
		release_Validation = "Release numbers must be at least 10 characters in length."
	end if

Action validation:
	release_value = GetFieldValue("Release").GetValue
	if len(release_value) < 10 then
		Defect_Validation = "Release numbers must be at least 10 characters in length."
	end if

Button click (calls MyRecordScript):
sub MyRecordScript {
	if ( $session->GetUserLoginName ne "admin" ) {
		$result = "Only the \"admin\" user can perform this button click.\n";
	}
	return $result;
}

Message boxes:
In the client only (this doesn't work in the web), you can use a message box to send the user a non-error message at any time. Message boxes don't prevent the record from being saved. For example:
	release_value = GetFieldValue("Release").GetValue
	if len(release_value) < 10 then
		msgbox "We recommend that Release numbers be at least 10 characters in length."
	end if
Record scripts:
If a button control is associated with a record script and that script returns a string, that string is presented to the user in an error message dialog box. Since it isn't really an error, the record can still be saved. As an example, you might associate a record script called "Bad_Release_Number" with a button called "Validate" that you encourage users to click before saving the record. The record script would contain any validation logic you want, and if it finds errors it would return a string. Since you can't use a MsgBox in the web interface, this is a nice way to interact with the user. Unfortunately, a string returned from a record script will only pop up to the user if the record script is associated with a button or a RECORD_SCRIPT_ALIAS; the pop-up doesn't occur if the record script is called with FireNamedHook from within a hook.
	Defect_Bad_Release_Number = "We recommend that Release numbers be at least 10 characters in length." & vbCrLf	
Note_Entry: You can simply make a note on the record.
	release_value = GetFieldValue("Release").GetValue
	if len(release_value) < 10 then
		current_note_entry = GetFieldValue("Note_Entry").GetValue
		SetFieldValue "Note_Entry", current_note_entry & vbCrLf & "FYI: We recommend that Release numbers be at least 10 characters in length." & vbCrLf & "-- ClearQuest Administrator" & vbCrLf
	end if
Message field:
An alternative to writing a permanent Note is to set a multiline "message" field with an informational or error message from the current edit. The message will stay there until the record is edited again, when it can be cleared as part of an action initialization hook.
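A minimal sketch, assuming a multiline field named "Message" (the field name and wording are illustrative):
	# In the action Initialization hook, clear the previous message.
	$entity->SetFieldValue("Message","");

	# Later in the same edit (e.g., in the action Validation hook), add an
	# FYI without blocking the save.
	$entity->SetFieldValue("Message","FYI: We recommend that Release numbers be at least 10 characters in length.");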

die:
If there is a bad error, perhaps an internal global script syntax error, you can die with an error message at any time. It doesn't kill the user's session, but it does end the current action. It's a good idea to combine the die error with a custom global script that sends administrators an email, as in the following example. The send_admin_email would gather information about the current user, current action, etc., and send an email to the CQ admins as an FYI.
	my $return = $new_entity->Validate;
	if ( "$return" ne "" ) {
		my $error_msg = "$script_name ERROR: Creation of a Secondary record failed validation.  Please contact the CQ support team with the following error message.\n\n$return\n";
		$session->OutputDebugString("$error_msg");   # This gets sent to dbwin32
		send_admin_email("$error_msg");              # This would send email to CQ admins
		die("$error_msg");                           # This pops up a box for the user
	}
dbwin32:
You can communicate with a fat client user via dbwin32. That is, your schema could $session->OutputDebugString("") messages that you want to inform the user about. If the user is a tester, they may want to see what's going on at a more detailed level.
	if ( $ENV{CQ_DEBUG} ) { $session->OutputDebugString("my message"); }
The environment variable "CQ_DEBUG" isn't necessary. To avoid having the system call out to the environment for every debug statement, that variable check could be placed inside a global script that checks it once and then sets a session variable, then checks the session variable each time.
	debug("my message");

	sub debug {
		if ( ! $session->HasValue("DEBUG") && "$ENV{CQ_DEBUG}" ) {
			$session->SetNameValue("DEBUG","1");
		}
		if ( $session->HasValue("DEBUG") ) {
			$session->OutputDebugString($_[0]);
		}
	}
print:
Within the CQ schema it's possible to open an external file and write to it in the same way any Perl script does. A straight print statement to STDOUT doesn't work, as there is no console to receive the text.
	open(OUT,"> C:\\temp\\information.txt");
	print OUT "Some data.\n";
	close(OUT);
HTML page:
It's possible to generate and display an HTML page from within the schema. One downside is that it will first pop-up a cmd window with nothing in it. That window can linger up to 10 seconds before the HTML page is displayed. This was only tested from the client and not the web.
$html_file = "C:\\temp\\user_information.html";
$session->OutputDebugString("Generating $html_file content ...\n");

push(@page,"<HTML>\n");
push(@page,"<HEAD><TITLE>Some Data</TITLE></HEAD>\n");
push(@page,"<BODY><FONT FACE='Arial' SIZE='-1'>\n");
push(@page,"<TABLE BORDER='1' CELLSPACING='0' CELLPADDING='0'>\n");
push(@page,"<TR><TH>Login</TH><TH>Fullname</TH><TH>Phone</TH>\n");

@logins = ("eric","john","ostrander");
foreach $login (@logins) {
	$fullname	= join(" ",@logins);
	$phone		= "123";
	push(@page,sprintf("<TR><TD>%s</TD><TD>%s</TD><TD>%s</TD>\n",$login,$fullname,$phone));
}
push(@page,"</TABLE>\n");
push(@page,"</BODY>\n");
push(@page,"</HTML>\n");

$session->OutputDebugString("Opening $html_file.\n");
open(OUTFILE,"> $html_file");
print OUTFILE @page;
close OUTFILE;

$session->OutputDebugString("Running $html_file ...\n");
system("$html_file");
Email:
At any point in the schema you can send end users or administrators emails.
VBScript:

	release_value = GetFieldValue("Release").GetValue
	if len(release_value) < 10 then

		set mailObj = CreateObject("PAINET.MAILMSG") 

		current_user_email = GetSession.GetUserEmail
		mailObj.AddTo(current_user_email) 

		id_number = GetFieldValue("id").GetValue
		subject = "Invalid Release number for Defect " & id_number & vbCrLf
		mailObj.SetSubject(subject) 

		body = "We recommend that Release numbers be at least 10 characters in length." & vbCrLf
		mailObj.SetBody(body) 

		status = mailObj.Deliver

	end if

Perl:
	SendEmail("ejo\@company.com|administrator\@company.com","This is the subject","This is the body.\n");

	sub SendEmail {

		# This global script will send an email to the specified users.
		# It requires the following input:
		#	1) Pipe-separated list of email addresses
		#	2) Subject line
		#	3) Email body
		# It doesn't return anything.

		# Usage:
		#	SendEmail("address1|address2","Subject","Body");

		my $script = (caller(0))[3];
		$session->OutputDebugString("Starting $script\n");

		use Net::SMTP;
		my $smtp_server	= "smtp.server.name";
		my $from	= "CQ_Schema_NoReply\@company.com";

		# Retrieve the input data.
		$session->OutputDebugString("$_[0],$_[1],$_[2]\n");
		if ( scalar(@_) != 3 ) {
			my $error = "\n$script syntax ERROR: Did not get exactly three string arguments.\n";
			$session->OutputDebugString($error);
			die $error;
		}
		my @addressees	= split(/\|/,$_[0]);
		my $subject	= $_[1];
		my $body	= $_[2];

		# Contact the SMTP server
		my $smtp = Net::SMTP->new($smtp_server);
		if ( "$smtp" eq "" ) {
			my $error = "\n$script ERROR: Unable to connect to SMTP server: $smtp_server\n";
			$session->OutputDebugString($error);
			die $error;
		}
		$session->OutputDebugString("Contacted SMTP server: $smtp_server.\n");


		# Build the email and send it.
		$smtp->mail($from);
		foreach $to (@addressees) {
			$session->OutputDebugString("Sending email to: $to\n");
			$smtp->to($to);
		}
		$smtp->data();
		$smtp->datasend("To: @addressees \n");
		$smtp->datasend("Subject: $subject\n");
		$smtp->datasend("Content-Type: text/plain;\n\n");
		$smtp->datasend("\n$body\n");
		$smtp->dataend();
		$smtp->quit;

		$session->OutputDebugString("Returning from $script\n");
		return;
	}
Table of Contents



Uninstall CQ on Windows.
Updated: 04/06/11
Version: 7.0.1
CQ doesn't come with an "uninstall" executable. You need to use the Windows Add/Remove programs utility. While that will effectively remove it from the machine, there may be residual registry entries. To perform a safe and clean removal, follow the steps in http://www-01.ibm.com/support/docview.wss?uid=swg21193899
You can download rationaluninstalltool.exe from IBM; it will uninstall all Rational products on the current machine. It uninstalls the MSI-based versions of the software (pre-7.1) in prep for installing the Eclipse-based version.

Table of Contents





CQ release area siteprep
Updated: 04/06/11
Version: 7.0.1
When creating a centralized release area from which users can do their Client installs, it's useful to provide them with a predetermined set of configuration parameters. Once the release area has been created, execute siteprep.exe in the top level of that directory.
Note: Most of the questions answered will be generic to all users. However, if you click on Enable Email Notification, while you can provide the users with a centralized SMTP server name, the dialog also requires you to fill in a "from" email address. Unfortunately, anything you put there will be used by all users who have used this install. Moreover, you can't leave it blank because the dialog won't allow it. To get around it, simply type a space into that box and then continue.
When users perform an install, they can still override the default settings, but at least have the configuration questions answered already, with the exception of the "from" email address (see note above). The user will still need to set their own email address after the install is complete.
If you run siteprep, the information will be stored in a file with a .dat extension. The system will automatically create a corresponding installation executable that uses your custom settings instead of the default settings. For example, if you named your siteprep settings "CQ_701_projectX", the system will store the configuration in a file called "CQ_701_projectX.dat" and will also create an installation link called "CQ_701_projectX" that you run instead of the generic "setup.exe".

Table of Contents





Customize the AuditTrail package
Updated: 04/07/11
Version: 7.0.1
The history functionality only captures the date, user, action, old state and new state. Most projects want to see "what" was changed. To get more detail, the AuditTrail package can be applied to a record type. While the AuditTrail package already provides out-of-the-box functionality for capturing change details, the package allows customizations.
Even if the AuditTrail package is applied to a record type, the history will still be recorded automatically. The history is still useful for reports that look at trending metrics, which would be difficult to parse from an AuditTrailLog record. Besides, there currently is no way to turn off the history.
As a side note, the numbers that appear in parentheses next to each field name in the audit trail log are the old and new field lengths. If it's the Submit action, only the new field length is displayed.
Customize the format:
The audit trail code is run from a BASE action hook called "at_Base". During validation the code ensures an AuditTrailLog record exists for the parent record and is linked to the parent. During commit a read-only global script called "at_CreateChangeEntry" is called to create the actual log entry. However, if the system finds a global script explicitly called "atCust_CreateLogEntry", it will use that script instead to format the log entry. To customize the format, create a global script called "atCust_CreateLogEntry", copy the contents of "at_CreateChangeEntry" over to it, and then make changes as desired.
Exclude fields from the log:
The "at_CreateChangeEntry" global script calls another global script called "at_IsExcludedField", if it exists. To exclude fields from the audit trail, create a global script called "at_IsExcludedField". The entity (object) and field name (string) are passed to that custom script. Your custom script should return a "1" if the field is to be excluded or a "0" if the field is the be included. If all fields wind up being excluded (perhaps for a given state, such as Submit), the package script will still create an audit trail log entry header, just without any field change information. There isn't any way to completely turn off the log entry for a given state/recordid/recordtype etc. Note that because "at_CreateChangeEntry" uses the GetFieldsUpdatedThisAction API call, fields changed during an action's initialization hook are already not included.

Table of Contents





Set up pessimistic record locking.
Updated: 05/12/11
New in CQ version 7.1 feature level 7, administrators can set up pessimistic record locking, which ensures only one user can edit a record at a time.
The optimistic locking model allows multiple users to view and attempt to modify a record at the same time, but prevents all but the first user from committing their changes. Users are not informed that others are also attempting to update the record until they click Apply.
The pessimistic locking model enforces sequential modification of records, which prevents the simultaneous updates of records. As soon as one user starts to update the record, this model places a lock on the record. Any other users that attempt to start to update this record are informed that another user has an update in progress and are locked out from modifying it.
This model requires a lock management strategy which addresses:
- Getting the lock.
- Informing any users they must wait for the lock to be released.
- Informing users that the lock has been released.
- Freeing locks that have been abandoned (such as system crashes).
To use pessimistic record locking, hook code must be added to the record types that want to use it. The hook code must be added as a new BASE type action for each record type. Manually removing locks can be accomplished with hook code that is implemented as a record script alias. You can use a ClearQuest query to find locked records by searching for records with the locked_by field equal to non-null values. The locked_by user database column is an integer column that records the user's login id when a record is locked.
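A minimal sketch of such a query using the CQ Perl API, assuming a record type named "Defect", an existing $session object, and that an empty value list is acceptable for the IS_NOT_NULL operator:
	my $querydef = $session->BuildQuery("Defect");
	$querydef->BuildField("id");
	$querydef->BuildField("locked_by");
	my $filter = $querydef->BuildFilterOperand();
	$filter->BuildFilter("locked_by", $CQPerlExt::CQ_COMP_OP_IS_NOT_NULL, []);
	my $results = $session->BuildResultSet($querydef);
	$results->Execute();
	while ( $results->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		printf("%s is locked by user id %s\n", $results->GetColumnValue(1), $results->GetColumnValue(2));
	}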
See the following link for detailed schema modifications and lock maintenance tips, under Administering -> Administering Rational ClearQuest.
https://publib.boulder.ibm.com/infocenter/cqhelp/v7r1m0/index.jsp?topic=/com.ibm.rational.clearquest.relnotes.doc/topics/c_cq_relnotes.htm
CQ comes with a script to find locked records: CQ-home\findrecordlocks.pl. See findrecordlocks

Table of Contents





UNC paths.
Updated: 04/29/11
UNC paths, such as \\computer\share\file are useful for creating references to external files. These paths work fine in the fat client, but may or may not work in the web interface. Independent of the interface type, they cannot contain any spaces or other special characters. Special characters need to be encoded to make a solid string, such as %20 for spaces: \\computer\share\file%20with%20spaces.pl.
Officially, UNC paths are not supported in the web: https://www-304.ibm.com/support/docview.wss?rcss=su&uid=swg21407760 Users must copy and paste the string into their Windows explorer. However, I've seen it work consistently in CQ 7.0.x, but then not work at all in CQ 7.1.x.
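A one-line sketch of the space encoding mentioned above, done before storing the path in a field ("Spec_Document" and $unc_path are illustrative names):
	(my $encoded_path = $unc_path) =~ s/ /%20/g;
	$entity->SetFieldValue("Spec_Document", $encoded_path);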

Table of Contents





Dynamically create an HTML page.
Updated: 05/26/11
It's possible to dynamically generate and display data in HTML format.
push(@page,"<HTML>\n");
push(@page,"<HEAD><TITLE>Some Data</TITLE></HEAD>\n");
push(@page,"<BODY><FONT FACE='Arial' SIZE='-1'>\n");
push(@page,"<TABLE BORDER='1' CELLSPACING='0' CELLPADDING='0'>\n");
push(@page,"<TR><TH>Login</TH><TH>Fullname</TH><TH>Phone</TH>\n");

foreach $login (@logins) {
	...
	push(@page,sprintf("<TR><TD>%s</TD><TD>%s</TD><TD>%s</TD>\n",$login,$fullname,$phone));
}
push(@page,"</TABLE>\n");
push(@page,"</BODY>\n");
push(@page,"</HTML>\n");

$html_file = $ENV{"TEMP"} . "/user_information.html";
open(OUTFILE,"> $html_file");
print OUTFILE @page;
close OUTFILE;

# The sleep is necessary to give the browser time to launch and display
# the temporary file before it gets deleted.
system("$html_file");
sleep 10;
unlink("$html_file");
Table of Contents



Open Services for Lifecycle Collaboration (OSLC).
Updated: 05/01/12
Records in different CQ databases can be linked via OSLC. See: http://www-01.ibm.com/support/docview.wss?uid=swg21433074
The links can be clicked to log into the remote db and view the record, but cannot be traversed programmatically, such as in a query.

Table of Contents





Programmatically create an Excel spreadsheet using Perl.
Updated: 05/29/12
Version: 7.0.1.8
Information gathered from CQ can be formatted for easy readability by placing the output in an Excel spreadsheet. The following example works with CQ perl.
Many more things can be done than just the bare-bones example below. See http://www.perlmonks.org/?node_id=153486 for useful information.
	use Win32::OLE qw(in with);
	use Win32::OLE::Const "Microsoft Excel";
	use Win32::OLE::Variant;
	use Win32::OLE::NLS qw(:LOCALE :DATE);

	$Win32::OLE::Warn = 3; # Die on Errors.

	$excelfile = ".\\myfile.xlsx";

	$Excel = Win32::OLE->new('Excel.Application');
	if ( Win32::OLE->LastError ) {
		print "Unable to open an Excel spreadsheet.\n".Win32::OLE->LastError;
		exit 1;
	}
	$Excel->{DisplayAlerts}=0;

	$Book = $Excel->Workbooks->Add();
	if ( Win32::OLE->LastError ) {
		print "Unable to add workbook.\n".Win32::OLE->LastError;
		exit 1;
	}

	$Sheet = $Book->Worksheets("Sheet1");
	$Sheet->Activate();       
	$Sheet->{Name} = "Stale ClearQuest records in CQ";
	if ( Win32::OLE->LastError ) {
		print "Unable to activate a sheet.\n".Win32::OLE->LastError;
		exit 1;
	}

	$Sheet->Range("a1")->{Value} = "My text";   

	$Book->SaveAs($excelfile);
	$Book = $Excel->Workbooks->Close();
Table of Contents



Programmatically create a Word doc using Perl.
Updated: 06/04/12
Version: 7.0.1.8
Information gathered from CQ can be formatted for easy readability by placing the output in a Word doc. The following examples work with CQ perl.
Many more things can be done than just the bare-bones example below. See http://www.adp-gmbh.ch/perl/word.html for useful information.
For an unknown reason, attempting to start Word throws an error if you are not using strict.

	# Create and save a new Word doc.

	use strict;
	use File::Spec;
	use Win32::OLE;

	my $word = CreateObject Win32::OLE("Word.Application");
	#$word->{'Visible'} = 1;   # Use this if you want to display the Word doc.

	my $document	= $word->Documents->Add;
	my $selection	= $word->Selection;

	$selection -> {'Style'} = "Heading 1";
	$selection -> TypeText("Some header");
	$selection -> TypeParagraph;

	$selection ->{"Style"} = "No Spacing";
	$selection -> Font->{Bold} = 1;
	$selection -> TypeText("This is bolded.\n");
	$selection -> Font->{Bold} = 0;

	$selection -> Font->{ColorIndex} = 2;
	$selection -> TypeText("This is blue.\n");
	$selection -> Font->{ColorIndex} = 1;

	$selection -> TypeParagraph;

	my $script	= (split(/\\/,$0))[-1];
	my $path	= File::Spec->rel2abs($0);
	my $script_path;
	($script_path	= $path) =~ s/\\$script//;

	$document->SaveAs("$script_path\\word.doc");

	exit 0;


ColorIndex values (for Font->{ColorIndex} above):
1 Black 
2 Blue 
3 Turquoise 
4 BrightGreen 
5 Pink 
6 Red 
7 Yellow 
8 White 
9 DarkBlue 
10 Teal 
11 Green 
12 Violet 
13 DarkRed 
14 DarkYellow 
15 Gray50 
16 Gray25

	# Open an existing word doc and flatten it to a text file.
	use Win32::OLE::Const qw(Microsoft.Word);

	$word_o		= CreateObject Win32::OLE("Word.Application");
	$document_o	= $word_o->Documents->Open("$wordfile");
	$word_o->ActiveDocument->SaveAs({FileName => "$wordfile.txt", FileFormat => wdFormatTextLineBreaks});
	$word_o->Quit;
Table of Contents



Pass a $session variable to a different Perl script.
Updated: 01/17/13
Version: 7.1.2
Perl scripts can be written to run very fast. However, if a CQ login is required, that can be the slowest part of the program. If a script has already created a CQ session, it's possible to share that object with another Perl script.
	$return = do("$path\\other_script.pl");

	# Check for a problem calling the other script.
	if ( ! defined($return) ) {

		if ( "$@" ne "" && "$@" !~ /_TK_EXIT_\(0\)/ ) {
			print "There was a problem compiling the child:\n$@\n";
			exit 1;
		}

		if ( "$!" ne "" )	{
			print "There was a problem executing the child:\n$!\n";
			exit 1;
		}
	}
Notes:
1) When using the "do" command, both Perl scripts share the same variable space. For that reason, it's important to "use strict" in both scripts to avoid issues.
2) Any variable you want shared between the two scripts needs to have a global scope of "our", as in "our $session = ...".
3) You can't pass CLI arguments using the "do" command. That is, "do 'script.pl $var'" doesn't work. All information has to be passed by setting global variables.
4) Don't do a normal exit from the called script, as it will exit the parent as well. That is, only call "exit" in the called script if there is a problem there.
5) The called script cannot return anything like a subroutine might. The "return" above is from "do" and is undef if there was a problem.
6) The check for "_TK_EXIT_(0)" above is there because calls to Perl TK functions set the $@ variable even on success "(0)". If the TK call failed, it would be set to "_TK_EXIT_(1)".
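Here's a bare-bones sketch of the idea (the script name, login values, and record ID are illustrative): the parent declares the session with "our", and the child simply re-declares the same variable.
	# parent.pl
	use strict;
	use CQPerlExt;
	our $session = CQSession::Build;
	$session->UserLogon("admin","","SAMPL","2003.06.00");	# illustrative credentials
	my $return = do(".\\other_script.pl");

	# other_script.pl
	use strict;
	our $session;	# the same variable as in the parent; "do" shares the variable space
	my $entity = $session->GetEntity("Defect","SAMPL00000001");	# illustrative record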

Table of Contents





ClearQuest and Designer DOs and DON'Ts.
Updated: 08/21/18
Version: 9.0.1.3

The following are bits of wisdom not necessarily documented in any IBM Rational doc/website, but derived from years of experience.

Performance
See the section on performance.

Stateless record unique keys
1) A unique key can be defined by multiple fields. The fields that make up the unique key are in the order in which they were added to the record type. Conversely, when viewing the Unique Key of a record type in the Designer, the fields are listed in alphabetical order. In the future, it will be indeterminate in which order they were added to the record type, so the key's field order is unknown without doing further investigation. For that reason, fields that are going to be utilized in the unique key should be added to the record type in alphabetical order.
2) If a unique key is made up of multiple fields, the values that make up the unique key are space-separated. For that reason, avoid using field values that have embedded spaces. If you do, there will be no way to programmatically determine what the key parts are if given a unique key in a script.
3) However, you should try to avoid using multiple fields in the unique key if at all possible. The reason is that if you ever want to import the records into another database, there is no way to select multiple unique key fields when updating existing records in the destination database. Yes, you could just import all the records and let it generate an error for those that would create records with duplicate unique keys. But if the only truly unique value is the dbid, it will import all of them as new records. If you need to have multiple fields in the unique key, it's highly recommended that you create a field called something like "unique_key" and space-concatenate the values there. Doing that will make life a lot easier later on. In fact, you should add the "unique_key" field to all record types to be consistent, even if the unique key is a single field.
4) In a record export file, multiple references in a reference list field are comma-separated. If the unique key has a field value with embedded commas, the system will be unable to interpret it correctly and therefore be unable to import those stateless records into another database. Avoid using unique key field values with embedded commas.
5) Don't use the "dbid" field as the unique key. It may be tempting to use it, as it's guaranteed to be unique. But, there is no way to display it in queries in the CQ GUIs and it isn't selectable during exports and imports of records, which means it can't be used to reference/update existing records in a destination database.
6) Whenever possible, avoid using a reference field as part of the unique key. This is problematic during imports and updates in other databases. The unique key then includes the unique key of a different record type, which can change during import, thus altering or potentially confusing the uniqueness of the unique key. In fact, it will be impossible to import updates. The system will produce an error that a record with that unique key already exists, and you have no way of telling the system to use the reference field because it doesn't show up as a choice in the update record field list.
7) If at all possible, don't leave any of the unique key fields blank. Always ensure there is a value in those fields. It's possible to create a record with one of the unique key fields blank and the record can be modified in the GUI, but it will be impossible to correctly reference the record using GetEntity in a script.
NOTE: If you must have multiple fields to make a record unique, the safest, easiest, and most programmatically accessible thing to do is to concatenate all the values into a single string field and make that single field the unique key instead. For example, if a given record is defined by Project=xyzproj and Team=myteam, put both of those values into a hidden field called something like "record_key" or "unique_key" and give it the value "xyzproj myteam". Trust me, this will save a ton of programmatic headaches later on. Also, note that if a record has multiple fields making up its unique key, there is no programmatic way to determine in which order they are to be listed. They are not necessarily listed in alphabetical order. They are listed in the order in which they were added to the record type. Concatenating several values into a single field solves that problem too.
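A minimal sketch of that concatenation, say in the record's Submit or Modify hook code ($entity is the current record; Project, Team, and unique_key are the illustrative names from the example above):
	my $proj = $entity->GetFieldStringValue("Project");
	my $team = $entity->GetFieldStringValue("Team");
	$entity->SetFieldValue("unique_key","$proj $team");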

Reference fields
If two record types need to refer to each other, use the built-in "back reference" functionality. DO NOT create a manual reference to the other record in both record types. Doing so will create a circular reference. Circular references cause performance issues and make programmatically dealing with records very difficult (painful).

Back reference field naming
When creating a reference to another record type, you can have the system automatically create a back reference to the current record type. You are allowed to name that back reference. For clarity and ease of identification, consider giving all back reference fields names like "Back_Ref_other-record-type", such as Back_Ref_Change_Request. Applying standard naming conventions like that makes it much easier later on when dealing with the record types programmatically.

Error messages
Don't have identical error messages anywhere in the schema. Even if you need to convey the same information, make the message unique for each place it occurs in the schema. If a user encounters the error and you need to troubleshoot the cause, it makes it very difficult to determine which of the instances of the error message they are actually encountering. If all error messages are unique in some way, then you can get right to the very spot where the error occurred.

Record and global script naming
Do not give record and global scripts generic names, like "Validation". If you ever try to search the schema for instances of that script, it returns way too much information, which makes it difficult to find what you want. Give it a unique name, like "Deliverable_Validation".

Old ID fields
If you ever need to import stateful records into another database, you'll need an "original ID" field to hold the original/source database's ID for that record. If during import you fill that field with the original ID, you can re-import the same records with updates without creating new ones. So, for every stateful record, always create a hidden text field called something like "old_id" or "orig_id". You never know when you'll need it.

Scripts
Put oft-used code in subroutines. However, if the subroutine isn't used by other record types, put it in the "Record Scripts" location of the current record type instead of "Global Scripts". Doing so avoids loading unnecessary routines for other record types. However, don't go crazy with subroutines that call subroutines that call subroutines. Centralization of code is desirable ... up to the point where it's difficult for anyone else to follow the code to find out what it's doing, especially if you don't leave behind a good architecture document ;-)
Don't give a global or record script a name that is a substring of another script name. For example, if you have a global script called "Verify_Resource_Roles", don't then create another script called "Verify_Resource_Role". The problem is that if you have a large number of global scripts and the "Verify_Resource_Roles" script is higher up in the code, when you click on the global script name "Verify_Resource_Role" and do Find In Hooks, CQ will actually stop searching when it finds the script called "Verify_Resource_Roles", because it successfully matched the string it was looking for. Unfortunately, that's not the script you wanted to see. In short, give all your global and record scripts very distinct names.

Admin access
It's standard to build restrictions into a schema such that only certain groups can modify certain fields in certain states. But, because there are times when data gets out of sync, it's a very good idea to have an administrative "back door". That is, allow all custom fields to be modifiable in all states by somebody in an administrative group.

External changes
WARNING: Be VERY careful about modifying other records as part of the Commit action of a record. The problem is that records can be updated in batch mode. That is, multiple records can be updated at the same time. The edits that went into the first record in the chosen set of records will be replayed in all the other records. Unfortunately, if your secondary processing does something like create a new record and link it to the current one based on the current record's configuration, that same secondary record will get added to all the other records too, which probably is not the intended result. So, because records can be updated in batch mode, be careful about making modifications outside of the current record.

Lists
Avoid using a record type where the records hold a single value, perhaps to be used in a list. For performance reasons, there are better ways of working with lists. The following is from best performance to worst:
- CONSTANT_LIST: Best performance, but only works for a single field on a single record type, and is difficult to update.
- Dynamic (named) list: Works across multiple fields on multiple record types.
- Hard-coded list: Only works for a single field on a single record type, plus is difficult to update.
- Record type: Dedicate a record type where each record is a value for the list.
- External list: Worst performance. If the list is to be maintained outside of CQ, the external file can be accessed directly, or perhaps updated in CQ periodically using a scheduled job.
If it's a simple list of values that isn't too large, the best bet is a dynamic/named list.

Batch Updates
For safety, DO NOT update records in batch mode in the client.
Updating multiple records in batch mode in the client can often have unintended consequences, such as secondary updates that erroneously get applied to all subsequent records.
If you need to backfill, say, a field in many, many records, write a script to update each one individually in a loop. There are several benefits to doing it in a scripted loop:
1) The edits made to one record do not carry over to subsequent records, even if there are secondary changes.
2) If a validation fails, you can log the record's ID and go to the next record, whereas the batch update will halt and wait for you to acknowledge the validation issue.
3) Before committing the record, you can ask CQ what fields were changed during this edit. If there are any fields that you weren't expecting to be updated, you can revert the edit, log the ID, and move on to the next record.
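A minimal sketch of such a loop (the record type, action name, field, and query are illustrative; the query is assumed to return the ID in its first display column):
	$resultSetObj = $sessionObj->BuildResultSet($queryDefObj);
	$resultSetObj->Execute;
	while ( $resultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		my $id		= $resultSetObj->GetColumnValue(1);
		my $entityObj	= $sessionObj->GetEntity("Defect",$id);
		$sessionObj->EditEntity($entityObj,"Modify");
		$entityObj->SetFieldValue("Some_Field","new value");
		my $status	= $entityObj->Validate;
		if ( "$status" ne "" ) {
			print "Validation failed for $id: $status\n";
			$entityObj->Revert;
			next;
		}
		$entityObj->Commit;
	}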

Note that if you want to prevent users from doing batch updates as well, see: Determine if a record is being modified inside batch update.

Querying
1) If you need a field value from a specific record, it's harder to code, but you get better performance if you write a query that selects that specific record and returns that field value (a sketch follows this list).
2) Don't write SQL queries into the schema unless you absolutely have to. The query syntax is sometimes dependent on database version and/or vendor. If you ever change the underlying user database, you may have to update and regression test the schema in many places. Write your queries using the CQ API, which is independent of the database vendor.
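The following is a minimal sketch of point (1) done through the API instead of raw SQL (the record type, ID, and field name are illustrative); because the query is built with the API, it stays portable across database vendors.
	$queryDefObj	= $sessionObj->BuildQuery("Defect");
	$queryDefObj->BuildField("Severity");
	$operatorObj	= $queryDefObj->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
	$operatorObj->BuildFilter("id",$CQPerlExt::CQ_COMP_OP_EQ,["SAMPL00000123"]);
	$resultSetObj	= $sessionObj->BuildResultSet($queryDefObj);
	$resultSetObj->Execute;
	if ( $resultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		$severity = $resultSetObj->GetColumnValue(1);
	}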

Schema versions
When saving a schema version or importing a version into a different repository, always include the date in the comment. The list of schema versions in the Designer doesn't indicate when a version was created. That information is helpful if you're ever researching when a change was introduced.

Config files for configurable parameters
When creating a record type that includes business rules that govern behavior, configurable items should be placed into a configuration record type and not hard-coded into the schema. For example, if there is a business rule that says a parent record can have no more than 5 children, otherwise it must be split into separate projects/efforts for implementation, it would be best to put that value into a configuration file. For example, if the record is ChangeRequest, create a ChangeRequest_Config record type. Put all configurable items as parameters in that record type. That is, the config record would say MaxDefectChildren = 5. If the business changes its mind, just update the config file and no extra work is needed. Having config files for configurable parameters in the system saves a ton of time later on. Parameterize as many things as is practical.
You can also create a generic "Configuration" stateless record type with a set of generic fields of each type, like a few short string fields, a few date-time fields, a few multiline text fields, etc... and add one field that is a Title/name for the configuration record to make it unique and labeled properly. Also include a Description field so that future admins (after you've moved on to a new project) can understand the purpose of that configuration.
Here are just a few examples of the types of things that you should not hard-code into the schema.
- team names
- SLA dates
- document names
- team email addresses
- etc...
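A minimal sketch of reading such a configuration parameter at hook time (the record type name, its display name, and the field name are illustrative assumptions, not part of any shipped schema):
	my $configObj		= $sessionObj->GetEntity("ChangeRequest_Config","Default");
	my $max_children	= $configObj->GetFieldStringValue("MaxDefectChildren");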

Email audit records
When a system gets large and complex, there are often many, many emails sent out for various reasons. For troubleshooting and record keeping, it's recommended that you create an Email_Audit (or similarly named) record type that records every email sent out: to whom, subject, body, date/time, which email rule triggered that email, etc...
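A bare-bones sketch of writing such an audit record from the code that sends the email might look like this (the record type and field names are illustrative, and the variables are assumed to come from the sending code):
	my $auditObj = $sessionObj->BuildEntity("Email_Audit");
	$auditObj->SetFieldValue("Recipients",$to);
	$auditObj->SetFieldValue("Subject",$subject);
	$auditObj->SetFieldValue("Body",$body);
	$auditObj->SetFieldValue("Email_Rule",$rule_name);
	my $status = $auditObj->Validate;
	$auditObj->Commit if ( "$status" eq "" );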

Error trapping
All errors from subroutines should be trapped and sent back to the calling routine and then presented to the user. Always include more than enough information to determine exactly what the user was trying to do at the time, what information was needed, but is missing, etc...
If the system is large and complex enough, you could have the system automatically generate an Error_Log (or other name) record that records a ton of detailed information that can then be examined to determine what parameters were in place when the error occurred. This saves a ton of time trying to figure out what the user was attempting to do and what configurations were in effect when the error occurred.

Field naming
Avoid giving fields generic names like "email" or "Assigned_To". When searching a very large schema, it becomes very difficult to isolate the field you're looking for. Instead, give them unique names across the whole schema. For example, if the record type is "Team", give the fields names like "Team_Email", "Team_Name", "Team_Manager", etc...

Date created
It's very common to search for records based on when they were created. The history.action_timestamp will give you that information for any record. However, when the number of records in a given table gets very large, querying on the history values can be very slow. For that reason, you should always add a DATE_TIME field to every record type called something like "Date_Created" and put the creation date-time in there when the record is generated. Querying on that field instead will be much faster when the history table becomes very large. Consider other fields like Date_Completed, etc... as well.
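A minimal sketch of stamping that field, say from the Submit action's hook code (the field name is the one suggested above; the date format is an assumption and may need to match what your database/locale accepts):
	my ($sec,$min,$hour,$mday,$mon,$year) = localtime();
	my $now = sprintf("%04d-%02d-%02d %02d:%02d:%02d",$year+1900,$mon+1,$mday,$hour,$min,$sec);
	$entity->SetFieldValue("Date_Created",$now);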

Date fields (in general)
The DATE_TIME field type in CQ holds the date and time. If, when setting the field value, you're only interested in setting the date, be aware that CQ will then automatically set the time stamp to midnight of that same day. Now, if you view that date field in a timezone that is the same as the server where the record was created, the date field will show the correct date. However, if you're in a later timezone, CQ will automatically adjust the time by a number of hours and the date will appear to be the previous day. That becomes a problem if, say, the field is supposed to show the "date" a given action item is due. The date will be misleading to the user. For that reason, if you're only interested in the "date" portion of a DATE_TIME field, either make the field a SHORT_STRING, or explicitly set the time to be noon for that given day. If set to noon, even if the "time" portion gets auto-adjusted for a given timezone, the displayed date will remain the same.
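For example, a minimal one-liner (the field name and the exact date format accepted are assumptions):
	# Pin the time to noon so a timezone shift can't roll the displayed date to the previous day.
	$entity->SetFieldValue("Due_Date","2018-08-21 12:00:00");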

Stateless record status
All stateless record types should be given a "Status" field, where the status of the record might be Enabled/Disabled, or perhaps Active/Inactive, etc... Queries that look for stateless records should filter out the inactive ones.

Audit Trail / history
It's a good idea to implement the Audit Trail feature on all record types and always display the audit trail and history fields on the form. This is especially useful when working on a team with multiple administrators with different levels of access to make changes to records. It's also very useful to know who changed what and when, especially if something breaks after the change.

Table of Contents





Remove/scrub history records.
Updated: 03/08/18
Version: 8.0.1.14

The following command can be used to scrub history records. See http://www-01.ibm.com/support/docview.wss?uid=swg21689388 for details.

WARNING: The documentation says that if your database is below feature level 9, it will leave the latest history entry in place; the one that doesn't have an expired_timestamp entry. However, if you use the -action option, it will remove the latest history entry if it matches that action. If that happens and you are below feature level 9, the record will no longer be editable. The system will be unable to add a date to the previous history record's expired_timestamp field if there is a value there already, which prevents the record from being saved.

	installutil scrubhistory dbset login password db record_type -before date -verbose

	Ex:
	installutil scrubhistory PRODCQ admin 1234 TKTS Error_Log -before 2012-03-31 -verbose
Table of Contents



Tell ClearQuest to use a different database driver.
Updated: 04/02/18
Version: 8.0.1.14

The database drivers that come bundled with CQ may not function correctly, perhaps because of an issue on the database side. If that ever occurs, you can tell CQ to utilize a different ODBC driver.

	# If using the default port number.
	installutil registerconnectoptions DB2 "USE_DB2_CLIENT_DRIVER"

	# If your architecture uses custom port number.
	installutil registerconnectoptions DB2 "USE_DB2_CLIENT_DRIVER;PORT=port"

	# Turn off using the other ODBC driver and go back to using the bundled one.
	installutil registerconnectoptions DB2 ""
WARNING: There is a possible downside to utilizing that installutil command. Some databases are contacted using custom port numbers and the port number can be different for different databases. What that implies then is that when you execute that installutil command, CQ on that computer will only be able to connect to databases that are associated with that port number. There is no way to specify multiple port numbers in that command. That is, you would be able to use the Designer, Client, and User Admin tools for databases using that port number, but not others … unless you first run the installutil command again for a different port number. If your schema repository database uses a different port number than the user databases associated with it, there's no way to make it work.

Table of Contents





Database views.
Updated: 04/18/18
Version: 8.0.1.14

The following discussion uses DB2.
A database view can be created by a DBA to provide a team with a read-only query directly in SQL. Connecting to the database directly is much faster than logging in through the CQ API. The DBA will create a view with a query defined by the CQ Admin. The DBA will create a database read-only user who is only allowed to run a specified set of views (queries). The other team can then programmatically log into the database and run a view to retrieve the data selected by the query.
	# Retrieve the list of views in a db.
	SELECT viewname FROM syscat.views;

	# Inspect the query associated with a view.
	# If running in pdsql, the "text" output will be truncated.
	SELECT text FROM syscat.views WHERE viewname = 'view';
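The programmatic side can be as simple as a DBI connection. This is a minimal sketch that assumes the DBD::DB2 driver is installed and the DBA has supplied read-only credentials (all names are illustrative):
	use DBI;

	my $dbh = DBI->connect("dbi:DB2:MYCQDB","readonly_user","password",{ RaiseError => 1 });
	my $sth = $dbh->prepare("SELECT * FROM myview");
	$sth->execute;
	while ( my @row = $sth->fetchrow_array ) {
		print join("\t",@row), "\n";
	}
	$dbh->disconnect;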
Table of Contents



Delete a dynamic/named list.
Updated: 08/15/18
Version: 9.0.1.3

View the Workspace. At the bottom, under "Dynamic List Names", right-click on the list to be removed and select "Delete".

Table of Contents





Install a package into a record type.
Updated: 02/23/06
The following assumes that the package is already installed in the schema. See Install a package into a schema.
Just because a package is available to a schema doesn't mean that it will be in each of the record types. In the Designer, while the schema is open for edit, right-click on the record type and select Setup Record Type for Packages. Select the packages to be added to the record type and click OK. The schema must be freshly opened for edit. That is, there cannot be any other changes to the schema for the current checkout.
WARNING: Once a package is installed, according to Rational, there is no way to remove it.

Table of Contents





Install a package into a schema.
Pre-defined functionality can be added to a schema via packages. With a schema checked out, select Package -> Package Wizard... and simply choose the name of the package to be added. If, for instance, one wants to integrate ClearCase with CQ without UCM, see the More Packages... button; this is known as installing a package into the schema repository. CQ installs the common ones into the repository for you as part of the install.
However, if the package you seek is not listed in "More Packages", it may need to be installed in the schema repository database. Note that even though you've run packageutil, some packages, such as EnterpriseSetup, do not show up anywhere in the Package Wizard. This is normal. The packageutil command lives in cq-home. All the packages that can be installed live in cq-home\packages.
  # packageutil installintoschemarepo [-dbset dbset-name] admin-login admin-password package-name package-version
If the current schema has been modified during this checkout, it must be checked back in prior to adding the package.

See also Install a package into a record type.

Table of Contents





Remove a package.
Updated: 02/23/06
According to Rational, there is no way to remove a package. Moreover, there is no way to (legally) remove the fields or any functionality installed by a package. See editing packages.

Table of Contents





Determine schema package and upgrade level.
Version: 7.0.1.8
Updated: 05/18/12
To simply view what versions of what packages are installed in a particular schema, in the Designer select View->Schema Summary.
As of CQ 2002, the package_upgrade_info.bat utility is obsolete. To upgrade a schema to the latest packages, enter the Designer but don't check out the schema to be upgraded. In the Package menu, select Upgrade Installed Packages. It will prompt for the schema to upgrade. Upgrading a schema to the latest packages does not upgrade user databases. After the upgrade, the schema is left in a checked-out state for the Admin to inspect and test prior to checking it back in.

Table of Contents





Edit schema packages.

WARNING! The following is strictly, vehemently NOT SUPPORTED by Rational Software. If the customization performed on a Rational package breaks the system, you're on your own. The following command does not necessarily work for all packages. Your changes will most likely be overwritten if you upgrade CQ. Make sure that the admin user performing the edit on the package is the ONLY user logged into the schema repository.

From the CLI:
  # packageutil enableediting -dbset "dbset" CQ-admin admin-password -enable CQ-admin
Log into the CQ Designer as the CQ-admin and check out the desired schema. Perform the modification, such as changing the Help Text that is associated with a package's field. Test the change and check in the schema.
  # packageutil enableediting -dbset "dbset" CQ-admin admin-password -disable CQ-admin
Table of Contents



Register a custom package.
Updated: 07/24/17
CQ version: 8.0.1.14
Custom packages can be created. Under normal circumstances, simply placing the package into the "...\ClearQuest\packages" directory is enough for CQ to recognize it. However, if not, run the following from the CLI. Note curiously that the admin password isn't required.
  # packageutil registerpackage package-name package-rev package-path

Ex:
  # packageutil registerpackage CustomPackage 1.1 "C:\Program Files\Rational\ClearQuest\packages\CustomPackage\1.1"
NOTE: Package registrations are recorded in "HKEY_LOCAL_MACHINE\Software\Rational Software\ClearQuest Packages" on older machines and in "HKEY_LOCAL_MACHINE\Software\Wow6432Node\Rational Software\ClearQuest Packages" on newer 64-bit machines.
NOTE: It isn't recommended that custom packages be stored under the CQ install directory, as they will get deleted if there is ever a reinstall.

Table of Contents





Remove a package from a record type.
Updated: 11/15/19
CQ version: 9.0.1.04
Once a package has been applied to a record type, there are very limited ways to remove it. That is, you can't just uninstall the package.

Hide the Package functionality
If the package was applied a while ago, and there are many schema versions between the time it was applied and now, it is possible to leave the package intact and just remove the tabs that the package added to the forms. In the future, should you decide you need this package, you can just add the tabs and the corresponding fields back to the forms.

Roll back the schema revisions
If a package was added by mistake, the only way to remove it would be to delete all schema versions back to before the package was added. However, this procedure cannot be done if any databases are using these newer schema revisions. Note: If the only database using the new schema version is a test database, then you can delete the test database. At this point, you can continue with rolling back the schema versions.

Restore from back-up
If the production database has already been upgraded with this package, the databases would need to be deleted. Then the schema where this package was installed could be deleted. Alternatively, the Schema Repository and all associated databases could be restored from a back-up from before the package was applied to the schema.

Table of Contents





Project Tracker overview.
CQ Project Tracker allows you to exchange information between the Microsoft Project 2000 (or later) project management application and CQ. It helps you create a “closed loop” project tracking system that takes advantage of the project management features of Microsoft Project and the change management capabilities of CQ. With Project Tracker, the project manager has access to information in CQ about work being performed by individual contributors. The manager can feed this information into Microsoft Project and update the project plan with the most up-to-date information available.

Table of Contents





Run a query.
In the CQ client, simply double-click on the query's name in the Workspace pane. Alternatively, you can highlight the query's name in the Workspace pane and select the menu option Query -> Run. The "Run Query" button merely reruns the currently selected query.

Table of Contents





Create a new query.
1) In the CQ client, launch the Query Wizard via Query -> New Query.
2) Select the Record Type on which the query will be run; most likely Defect.
3) Since this is a new query, just click Next.
4) Select the fields to be displayed when the query is run by double-clicking on them in the order you want them to appear. Click Next when done. If you want to remove a column from the query output, right-click on it and select Delete.
5) In turn, highlight each Filters field and decide on the criteria by which that field will be filtered. Click Run when satisfied. The filters can be arranged in any depth of "and"s and "or"s.
If you want to save the query, select File -> Save As... The query is saved in the Personal Queries folder of the user that created it. The "admin" user has the additional ability to add the query to the Public Queries folder. As admin, simply drag the query name to that folder to make a public copy of it.

WARNING: In CQ prior to 2000, when saving a query for the first time, it's ok to say Save As. If editing a previously saved query, only use the Save option. If you select Save As to save an existing query, it correctly prompts you that it is about to overwrite the old query and then incorrectly crashes the CQ client.

Table of Contents





Run a chart.
In the CQ client, simply double-click on the chart's name in the Workspace pane. Clicking on the "Run Chart" button merely reruns the current chart. Use the Chart menu options to alter the look of the current chart. Alternatively, you can right-click in the chart to gain the same menu options. Right-click in the white area around the chart to access the chart's Properties. Charts can be saved as an image (jpg,bmp,etc...) via File -> Export Chart...

Table of Contents





Create a new chart.
1) In the CQ client, select Query -> New Chart...
2) Choose a record type ... (usually Defect) and select OK.
3) Specify a chart type. Leave the Run Query box checked if you want to run the chart immediately following creation. Click on Next.
4) Specify which fields to place on the axes. The vertical axis must contain a field that can be counted up. The columns that will be displayed in the histogram can be subdivided by fields chosen/selected in the Legends boxes. For example, instead of showing all bugs lumped together, sort them by Project. In this example, you would select Project in the Legends box. Select Next when satisfied. These parameters can be easily modified on the fly after the query is run and before the final chart is saved.
5) Give the chart some mnemonic Labels. Click Next.
6) Select display type and click Next.
7) Select chart style and click Finish. De-select the Color option if printing the final chart to a black and white printer.
If you want to save the chart, select File -> Save As... The chart is saved in the Personal Queries folder of the user that created it. The "admin" user has the additional ability to add the chart to the Public Queries folder. As admin, simply drag the chart name to that folder to make a public copy of it. Be careful to drag it to the correct subfolder; Aging Charts, Distribution Charts or Trend Charts.

WARNING: When saving a chart for the first time, it's ok to say Save As. If editing a previously saved chart, only use the Save option. If you select Save As to save an existing chart, it correctly prompts you that it is about to overwrite the old one and then incorrectly crashes the CQ client.

Table of Contents





Run a report.
In the CQ client, simply double-click on the report's name in the Workspace pane. Clicking on the "Run Report" button merely reruns the current report. Reports come with a set of VCR buttons at the top of the Query Results pane to step through the report. The magnification can be customized in that tool bar. In the same tool bar, click on the button/icon that looks like an envelope to export the report to another program.
CQ comes bundled with Seagate Crystal Reports version 6. This can be used to generate advanced reports. However, Crystal Reports does not handle parent/child relationships.

Table of Contents





Create a new report using Seagate Crystal Reports.
1) In the CQ client, select the menu option Query -> New Report... and choose a record type (most likely Defect). Click on OK.
2) Choose a Report Format via Browse... -> Public Queries -> Report Formats. Then, choose which previously defined Query to Apply to the report. Browse and select the appropriate query, then click OK. CQ will immediately run the report.
If you want to save the report, select File -> Save As... The report is saved in the Personal Queries folder of the user that created it. The "admin" user has the additional ability to add the report to the Public Queries folder. As admin, simply drag the report name to that folder to make a public copy of it.

WARNING: When saving a report for the first time, it's ok to say Save As. If editing a previously saved report, only use the Save option. If you select Save As to save an existing report, it correctly prompts you that it is about to overwrite the old one and then incorrectly crashes the CQ client.

If there is no report format for what you would like see reported, simply create one. The following assumes you have installed Seagate Crystal Reports along with CQ. Crystal Reports comes bundled with CQ on a separate cdrom. The following steps were done with Crystal Reports version 6 and demonstrate a very basic report format.
a) Select Query -> New Report Format... and choose a record type (most likely Defect).
b) Give it a unique Report Format Name.
c) Add appropriate fields to the right-hand window by double-clicking on them.
d) Click the Author Report... button.
e) Select Insert -> Database Field...
f) Drag and drop each field into the Details pane.
g) Select File -> Save and then File -> Exit. DO NOT use "Save As" or CQ will not be able to find the report format.
h) Click the Ok button and commit the report format to the database. This report format is now available to be used with Query -> New Report...

Table of Contents





Edit an existing chart's parameters.
Run the existing chart in the CQ Client. Select Edit->Properties...

Table of Contents





Create a report using Rational SoDA for Word.
This isn't supported in CQ Web or CQ UNIX. It requires that Rational SoDA be installed. It's bundled with installations such as DevelopmentStudio but not with the stand-alone CQ install.
CQ comes with a couple predefined report formats. To invoke one, simply select Query->Generate SoDA Report in the CQ Client.
To define your own template (format), go to Start->Programs->Rational Software-> Rational SoDA for Word. You'll notice that the Word doc that comes up has a SoDA menu listed at the top.
1) In the CQ Client, create and save in the Public folder a query that gathers the information you want reported. It's not mandatory to create a query first, but it makes step (4) easier.
2) Select SoDA->Template View. A new window called "SoDA Template View" will appear.
3) In the right-hand pane called "Select domain class", click on "ClearQuest CQDatabase". Fill in the DatabaseSet (most likely something like 2003.06.00) and the DatabaseLogicalName (the five-character user db name). Log in as a user with administrative privileges. To extend the domains listed, see "Extend SoDA source domains".
4) Click on the query you created in step (1). Note: If you run a SoDA report with an existing query that includes a dynamic filter prompt, you need to include the prompt command in the SoDA template to ensure that SoDA displays a dynamic filter prompt in the CQ query. Once the query is in the left-hand pane, click on the fields to display. Alternatively, you can skip the pre-defined query and simply build up the REPEAT and DISPLAY commands yourself.
5) Once you are satisfied that the proper information will be reported, right-click on the OPEN command and choose Modify. Blank-out the values for DatabaseSet and DatabaseLogicalName. Those values will be automatically picked up when the report is run from within CQ.
6) Close the SoDA Template View window.
7) In the Word doc, go to File->Properties and change the Title. This name will be the one that appears to users when running a report in CQ.
8) Close the Word doc. Do a Save As to the template directory, namely "C:\Program Files\Rational\SoDAWord\Template\ClearQuest".
Now, when you select Query->Generate SoDA Report from CQ, the template you created appears in the "Generate ClearQuest Report" dialog box. There's no need to close and reopen a user db if it was already open. The templates are read dynamically when you choose to run a report.

Table of Contents





Export/import queries, charts, and reports; public and personal.
Updated: 05/06/11
Custom queries, charts, and reports only live in the user db in which they were created. If you have a large number of these that you'd like to "seed" a new user db with, you can simply export them from one and import them into another. Keep in mind that for the queries and such to actually run correctly, the schema in the new db, if not the same, will need to at least be similar to the old.

GUI export and import can be done in the fat Client only. Right-click on a query and choose Export. It will create a .qry file that can be moved elsewhere or emailed to someone.
You can also do it from the CLI. The following "bucket" tool will export to and import from a file (not user readable). You can type "bkt_tool -help" for more details. The bkt_tool.exe command lives in cq-home. By default, it will only export the Public queries. If the user is other than "admin", it will export all personal and public queries. Contrary to what the Admin Manual states, you do not have to have SuperUser privileges to do this; you merely need to have Public Folder Administrator rights to export/import the public queries. As an additional note, as an alternative to the -storagefile option, you can use the -directory option with an argument such as "C:\temp\admin". I'm not sure why one would choose one over the other.
  # bkt_tool -export -user cq-admin -password password -dbset old-schema-repo -dbname old-user-db -storagefile output-file

  # bkt_tool -import -user cq-admin -password password -dbset new-schema-repo -dbname new-user-db -storagefile output-file
NOTE: To get personal queries, use the undocumented -Bktusername (-B) option to specify the user's buckets to export/import. However, it appears this was only available up to CQ 2003. I'm not sure why a superuser can't transfer the queries of any user, but it appears that a user must execute the commands his/herself. That is, the -user and -password options must match the -B option. Anyway, I had dismal results trying to get personal queries to transfer over using bkt_tool with -B. While I've gotten it to work on occasion, it seems to fail to bring over the personal queries more often than it succeeds. I don't know what I did differently to make it work the times it succeeded.

NOTE: Ignore the error: "::SetRptBucketIDs Unknown Exception". It is unknown why this error occurs on every import, but it doesn't seem to affect anything.
NOTE: Even though the bkt_tool help lists the options with -dbname appearing before the -dbset option, for poor programming reasons, the -dbset option must come before the -dbname option on the CLI. If not, you'll get the error "No DBName Specified".

Email queries

If your intention is to get a query over to another user, you can email it to them. Simply right-click on the query and select Email.

Copying using the API

Even though the bkt_tool seems to have the ability to transfer personal queries, charts, and reports, it doesn't carry it off very well; see the above notes. The following code shows how to accomplish nearly the same thing in PERL utilizing the CQ API. Note that as of the 2003.06.15 release, the API only supports methods for queries and charts, but not for reports or report formats. The following is not a stand-alone script, but rather just shows the commands used to make copies of queries and charts. The code below doesn't overwrite existing items.
Here are a couple more notes on the API regarding this effort. There is a method called SaveQueryDef that does what the script below does. However, there is no corresponding method for charts, so the code below handles both the same way. Yes, you can simply pass that method the chart object and it will successfully create the chart, but the icon next to the chart will look like a query. Moreover, from then on, even though the chart will function correctly, any API calls to that object will return a query object; very strange. Note, as of CQ ver 2003.06.15, the SaveQueryDef method has a parameter to choose whether or not to overwrite an existing query, but that parameter doesn't seem to work no matter what it's set to.

WARNING: I discovered that using the 2003.06.15 API to create a query definition in an Access database has the following quirk. The InsertNewQueryDef method (seen below) successfully creates the query in the destination database. The query works fine when you execute it. However, if you try to create a Report or when using a parent/child form control to find a query by name, queries created with the API do not show up. Note that the exact same API calls were used on a destination database that is Oracle and the anomaly did not occur. So...

#######################
# Log into the source database.
print "$module: Logging into the source database ($src_db) at ($src_dbset) ...\n";
$sessionObj = CQSession::Build;
$sessionObj->UserLogon($src_login,$src_passwd,$src_db,$src_dbset);


#######################
# Retrieve the source qcr.
$workSpaceObj = $sessionObj->GetWorkSpace;
$workSpaceObj->SetUserName($src_login);
$x = 0;
foreach $item_type ("Query","Chart") {

	$get_list_method	= "Get${item_type}DbIdList";
	$get_obj_method		= "Get${item_type}DefByDbId";

	$listRef = $workSpaceObj->$get_list_method(2);
	foreach $dbid (@$listRef) {

		$item_pathRef	= $workSpaceObj->GetWorkspaceItemPathName($dbid,1);
		$item_name	= pop(@$item_pathRef);

		# Build the path.
		$item_path = "";
		foreach $item (@$item_pathRef) {
			if ( "$item_path" ) {
				$item_path .= "/$item";
			} else {
				$item_path = $item;
			}
		}
		print "$module: $item_type\t$item_path/$item_name\n";

		$item_info[$x][0]	= $item_type;
		$item_info[$x][1]	= $workSpaceObj->$get_obj_method($dbid);
		$item_info[$x][2]	= $item_pathRef;
		$item_info[$x][3]	= $item_path;
		$item_info[$x][4]	= $item_name;

		$x++;

	}
}
$nitem = $x;
print "$module: $nitem items\n";

CQSession::Unbuild($sessionObj);


#######################
# Log into the destination database.
print "\n$module: Logging into the destination database ($dest_db) at ($dest_dbset) ...\n";
$sessionObj = CQSession::Build;
$sessionObj->UserLogon($dest_login,$dest_passwd,$dest_db,$dest_dbset);
$workSpaceObj = $sessionObj->GetWorkSpace;
$workSpaceObj->SetUserName($dest_login);


# Build a list of existing queries and charts.
foreach $item_type ("Query","Chart") {
	print "$module: Building a list of existing $item_type definitions ...\n";
	if ( $item_type eq "Query" ) {
		$defRef = $workSpaceObj->GetAllQueriesList;
	} else {
		$defRef = $workSpaceObj->GetChartList(2);
	}
	foreach $def (@$defRef) {
		push(@existing_defs,$def);
	}
}


#######################
# Build the queries, charts, and reports.
for ( $x = 0; $x <= $nitem - 1; $x++ ) {

	$item_type	= $item_info[$x][0];
	$itemObj	= $item_info[$x][1];
	$item_pathRef	= $item_info[$x][2];
	$item_path	= $item_info[$x][3];
	$item_name	= $item_info[$x][4];
	$item_fullpath	= "$item_path/$item_name";

	print "\n$module: $item_type\t\"$item_fullpath\"\n";

	# Build the path.
	$path = "";
	foreach $folder (@$item_pathRef) {

		print "$module: $folder";
		if ( "$path" ) {
			$path	.= "/$folder";
		} else {
			$path	= "$folder";
		}

		# If this is the top of the tree, get the dbid and go to the next folder.
		if ( $folder eq "Personal Queries" ) {
			$dbidRef	= $workSpaceObj->GetWorkspaceItemDbIdList(2,3,0,"");
			$parent_dbid	= @$dbidRef[0];
			print ", root folder";
			goto NEXT_FOLDER;
		}

		# If the folder already exists, get the dbid and go to the next folder.
		# Unfortunately, there's no elegant way to ask if a folder exists,
		# and if it does, there's no elegant way to get the existing folder's dbid.
		$dbidRef = $workSpaceObj->GetWorkspaceItemDbIdList(2,3,$parent_dbid,"");
		foreach $dbid (@$dbidRef) {
			$pathRef	= $workSpaceObj->GetWorkspaceItemPathName($dbid,1);
			$fullpath	= "";
			foreach $pathpart (@$pathRef) {
				if ( "$fullpath" ) {
					$fullpath .= "/$pathpart";
				} else {
					$fullpath = "$pathpart";
				}
				if ( "$path" eq "$fullpath" ) {
					$parent_dbid = $dbid;
					print ", folder already exists";
					goto NEXT_FOLDER;
				}
			}
		}

		# Create the new folder.
		eval {
			print ", create new";
			if ( $preview ) {
				$parent_dbid	= "not created in preview mode";
			} else {
				$new_dbid	= $workSpaceObj->CreateWorkspaceFolder(0,2,$folder,$parent_dbid);
				$parent_dbid	= $new_dbid;
			}
		};
		if ( "$@" ) {
			print "$module: ERROR:\n$@\nExiting early ...";
			goto FINISH;
		}

		NEXT_FOLDER:
		print ", $parent_dbid\n";
	}


	# Does the item already exist?
	foreach $entry (@existing_defs) {
		if ( "$entry" eq "$item_fullpath" ) {
			print "$module: \"$item_name\" $item_type already exists.\n";
			goto NEXT_ITEM;
		}
	}


	# Create the item.
	$insert_method = "InsertNew${item_type}Def";
	if ( $preview ) {
		print "$module: $insert_method(\"$item_name\",$parent_dbid,$itemObj) not performed in preview mode\n";
	} else {
		eval {
			$workSpaceObj->$insert_method("$item_name",$parent_dbid,$itemObj);
		};
		if ( "$@" ) {
			print "$module: ERROR:\n$@\nExiting early ...";
			goto FINISH;
		}
	}

	NEXT_ITEM:

}


FINISH:
CQSession::Unbuild($sessionObj);

Table of Contents



Rename a query.
Only administrators can rename queries that live in the Public folder.
In the Client, simply right-click on the query and select Rename.
In the web interface, under the Operations menu, select Edit Query. In the pull-down list that comes up, choose the query to be renamed. Click on the Rename Query button.
NOTE: Don't put any double quotes in a query name in the web interface. If you do, you'll be unable to rename it to anything else via the web interface -- but you still can rename it in the Client. If you must rename a query that has double quotes in its name via the web interface, instead of selecting Rename Query, edit the existing query and give it a new name in the process. The new one will have all the same properties. The old one will need to be deleted via the Client.

Table of Contents





Delete a query.
Updated: 03/27/18
Version: 8.0.1.14
Only Super User and Public Folder Administrator can delete queries that live in the Public folder.
In the Client, simply right-click on the query and select Delete.
In the 7.0.x style web interface, under the Operations menu, select Edit Query Definition. Select the query to be deleted from the pull-down list that pops up. Click the Delete Query button.
In the 7.1+ style web interface, simply right-click on the query and select Delete.

There is no way to delete an existing query or folder using the API.
However, you can use the following kludgy method to delete queries, perhaps if there are hundreds to be deleted that would be too tedious to do manually.
1) Delete ALL the public queries from a working/test database1.
2) Write a script that reads all the public queries in database2 (where the queries are to be removed).
3) Selectively import (rebuild) the queries into database1.
4) Delete ALL the public queries from database2.
5) Use the same script to copy ALL public queries from database1 to database2.

Table of Contents





Enable a report to display local time zone date/times.
Updated: 02/24/06
By default, a report using Crystal Reports will show times in GMT, which is how they are stored in the db. Follow these steps to have a report show the times in the local time zone.
CQ ships with a Crystal Reports UFL, "u2lcrratl.dll", which needs to be placed in "C:\Program Files\Common Files\Crystal Decisions\2.5\bin". This dll provides a function, "RATL_LocalDateTime(DateTime)", that converts an input datetime to the local time zone format.

1) While authoring the Crystal Reports format through CQ, go to Report->Formula Workshop on the menu bar in Crystal Reports.
2) In the Formula Workshop window, right-click on "Formula Fields" and click "New".
3) Enter the formula field name in the new dialog and choose "Use Editor".
4) Expand the "Functions" tree usually visible in the middle pane.
5) Expand the "Additional Functions" node and select "crratl" child node.
6) Two functions, namely "RATL_LocalDateTimeString(String)" and "RATL_LocalDateTime(DateTime)", should be visible.
7) Drag and drop the "RATL_LocalDateTime(DateTime)" function into the formula editor pane.
8) Use your datetime field as a parameter to this function.
9) Save the formula.
10) Use this formula field in the report format instead of the database fields.

Once the above steps are carried out successfully, Crystal Reports will display the datetime information according to the local time zone and hence will be synchronized with the query result display.

Table of Contents





Have a query run at startup.
Updated: 02/24/06
Users can choose to have a query automatically run when they log in.
In both the web and Client interfaces, simply right-click on the desired query and choose "Run at Startup".

NOTE: Unfortunately, once a query has been chosen to run at startup, you can choose a different query to run at startup, but there is no simple way to have NO query run at startup. You can, however, tell it not to run the startup query via the URL:

website/main/?USE_CASE=cq_startup&setting=false

Table of Contents





View/edit the SQL equivalent of a query.
Updated: 05/06/11
Queries generated using the client or web interface tool are converted to SQL statements internally. To understand what the query is doing (looking for) internally it's possible to view the SQL query actually being run.
In the client interface, select View -> View SQL Pane. The SQL statement can be edited manually if desired. However, if modified that way, the GUI Query Editor will no longer work. Whether or not a user can see/edit the SQL Pane is controlled in the User Administration Tool. One of the Privileges is called "SQL Editor".
If you don't have SQL Editor privileges, you can still edit the SQL query.
1) Right-click on the query and select Export.
2) Open the resulting .qry file in a text editor and modify the SQL statement.
3) After closing the text editor, rename the file to a different query name.
4) Right-click on Personal Folders and select Import.

Table of Contents





Add record counts to a report.
Updated: 03/13/06
It's possible to supply a record count in a report using Crystal Reports.
How to display the number of records in a report.
The following is a more complex example of counts by Project, by Release, by Severity supplied by Business Objects.
Project A
	Release A
		Severity High
			Record 1
			Record 2
			Record 3
			...
			--------------------
			Count of records
			--------------------
		Severity Medium
			Record 1
			...
			--------------------
			Count of records
			--------------------
		Severity Low
			Record 1
			...
			--------------------
			Count of records
			--------------------
	Release B
		Severity High
		...
			And so on. This continues for every combination
of project to release and each severity.
Place the fields you want to view into the report.
1) Select "Insert" | "Group". The "Insert Group" dialog box opens with a drop-down list of fields to be grouped on. Select the "Project" field and the order in which it is to be grouped. Click OK. This groups the report by project.
2) Now select "Insert" | "Summary". This opens the "Insert Summary" dialog box. Select "Project" from the field list | "Count" from the summary list | "Group #1 Project - A" from the summary location. Click OK.
3) Repeat steps 1 and 2 for the "Release" and "Severity" fields to get the count for every combination of project, release, and severity.
Refer to the sample report named "Summary Group" shipped with Crystal Reports in the following location: "C:\Program Files\Crystal Decisions\Crystal Reports 10\Samples\En\Reports\Feature Examples". This sample report is based on the above steps.
You can also check the Business Objects Knowledge Base for solutions to issues similar to your own. You can find the knowledge base at Business Objects.

Table of Contents





Run a query on a date field to find those dated "yesterday".
Updated: 07/07/06
Whether in the Client or web interface, one can choose a filter on the date field in question and choose the built-in constant of "yesterday". However, when creating a query in a script, there is no direct way to refer to yesterday. The date corresponding to yesterday must be programmatically determined. In this example, the "time" portion of the date is ignored.
	use Time::Local;

	# Get today's date components.
	($day,$month,$year)	= (localtime)[3..5];
	$year			+= 1900;
	$month			+= 1;
	$month			= "0$month" if ( $month < 10 );
	$day			= "0$day"   if ( $day   < 10 );

	# Subtract one day (86400 seconds) from midnight today to get yesterday's date.
	($day,$month,$year)	= (localtime(timelocal("0","0","0",$day,$month - 1,$year) - 86400))[3..5];
	$year			+= 1900;
	$month			+= 1;
	$month			= "0$month" if ( $month < 10 );
	$day			= "0$day"   if ( $day   < 10 );
	$yesterday		= "$month/$day/$year";

	...

	$AND_operator->BuildFilter("Date_field",$CQPerlExt::CQ_COMP_OP_EQ,["$yesterday"]);
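For context, a hedged sketch of how the surrounding query and the $AND_operator above are typically built (standard CQPerl calls; the "Defect" record type and "Date_field" name are placeholders):
	$queryDefObj  = $session->BuildQuery("Defect");
	$queryDefObj->BuildField("id");
	$AND_operator = $queryDefObj->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
	$AND_operator->BuildFilter("Date_field",$CQPerlExt::CQ_COMP_OP_EQ,["$yesterday"]);
	$resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute();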
Table of Contents



Sort an API query.
Updated: 05/12/10
Programmatic queries can be sorted.
	$queryFieldDefsObj = $queryDefObj->GetQueryFieldDefs;

	$idfield = $queryFieldDefsObj->ItemByName("id");
	$idfield->SetSortType($CQPerlExt::CQ_SORT_DESC);
	$idfield->SetSortOrder(1);

	$ownerfield = $queryFieldDefsObj->ItemByName("Owner");
	$ownerfield->SetSortType($CQPerlExt::CQ_SORT_ASC);
	$ownerfield->SetSortOrder(2);
Table of Contents



Determine a record's dbid.
Updated: 07/27/11
When selecting display fields for a query, the dbid field is not one of the choices. There's no reason it can't be there ... Rational??
In the client interface, the dbid is automatically displayed on each record at the very bottom-left corner of the form. Unfortunately, it's not displayed in the web interface.
If you want to know the dbid as a one-off case in the web, when a query is complete, simply select/copy all the records in the result set and paste that list into a text file. The dbid will be automatically listed in the left column of each row.
If you want to give users the ability to see the dbid more often, create a hidden field (not on a form) in the record type called something like Defect_dbid. Create a Default Value hook for that field that places the record's dbid into Defect_dbid. However, this will only put the dbid there when the record is first created. You'll need to create a script to backfill the Defect_dbid field for existing records. Users will then be able to include the Defect_dbid field in the query display.
The dbid can be found programmatically the normal way, as in:
	my $dbid = $entity->GetFieldStringValue("dbid");
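To backfill existing records (as mentioned above), a minimal CQPerl sketch along these lines should work; the "Defect" record type, the "Defect_dbid" field, and the login details are assumptions to adjust for your schema:
	# Hedged sketch: copy each record's dbid into the hidden Defect_dbid field.
	use CQPerlExt;
	my $session = CQSession::Build();
	$session->UserLogon("admin","password","SAMPL","");

	my $queryDefObj = $session->BuildQuery("Defect");
	$queryDefObj->BuildField("id");
	my $resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute();

	while ( $resultSetObj->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		my $id     = $resultSetObj->GetColumnValue(1);
		my $entity = $session->GetEntity("Defect",$id);
		my $dbid   = $entity->GetFieldStringValue("dbid");
		$session->EditEntity($entity,"Modify");
		$entity->SetFieldValue("Defect_dbid",$dbid);
		my $error  = $entity->Validate();
		die "Validate failed for $id: $error" if ( "$error" ne "" );
		$entity->Commit();
	}
	CQSession::Unbuild($session);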
Note that a dbid gets consumed for stateful and stateless records even if the Submit action is not committed.

Table of Contents





Edit SQL query language/code.
New in CQ2001, users can be given permission to edit the SQL code associated with a given query. Formerly, when a query was run, an "SQL editor" tab would appear under the result set. Now, to see that tab go to View->View SQL pane. All users will be able to view the pane, but only those with that privilege can edit it. The "SQL Editor" privilege is set via ClearQuest User Administration.

Table of Contents





Rename a field across all queries.
Updated: 08/16/16
Version: 7.1.2
If you, say, rename a short-string field called "Application" to "Application_old" and create a new reference field called "Application", all existing personal and public queries will refer to "Application_old".

You can't use bkt_tool. If you extract the queries from a database where the field has not yet been renamed in the schema, and import them into a database where the change has already been made, the queries will still point to the old field name, because the query object refers to the field def ID and not the name. Moreover, the export file created by bkt_tool cannot be read and modified prior to import.

You can't use database SQL either. The queries are stored in the database in binary format, so the contents of the query are inaccessible.

However, you can search for a string's existence in both filter and display fields and then update the queries using the API.
WARNING: Be VERY careful with the following type of change. The replacement regular expression below needs to be more specific about what it's changing. For example, if replacing "id" with "name", you would incorrectly change "dbid" to "dbname".
WARNING: The queries saved in the database do not contain any fields that are dynamic filters. Dynamic filter fields are only added to the query at run time, so the GetSQL below will not find them.
	# $string1 (the old field name) and $string2 (the new field name) are
	# assumed to be set earlier in the script.
	$workspace_o = $session_o->GetWorkSpace;
	$listRef = $workspace_o->GetQueryList(3);
	foreach $path (@$listRef) {
		$querydef_o = $workspace_o->GetQueryDef($path);
		$sql = $querydef_o->GetSQL();
		if ( "$sql" =~ /$string1/ ) {
			$sql =~ s/$string1/$string2/g;
			$querydef_o->SetSQL($sql);
			print "$path\n";
		}
	}
  
The GetQueryList constants are:
0 = Don't return any queries (I'm not sure why you would use this)
1 = Public only
2 = Personal only
3 = Public plus personal

Alternatively, if a field needs to be renamed across many, many queries, you can import a complete set of queries from your production environment into your test environment using bkt_tool.
You still have to make manual changes to all the queries in the test environment. Then, on the day you push to production, you can export public queries from test and import them into production, which will then point to the new name. This is still a large manual effort in your test environment, but saves time when the push to production is made. You can use "GetSQL" (above) to search for the queries that need changing and just not do the "SetSQL" for this type of effort.

Table of Contents





Programmatically update and save a query.
Updated: 03/13/17
Version: 8.0.1.13
Queries can be created using the API. Once you've generated the desired query def object the normal way, it can be saved to a file for import into a different database or saved as a clickable query in the current database.
Save to an external (binary) query file.  Whatever the file is named
is what the query will be named when it's imported.
	$queryDefObj->Save("C:\\temp\\My Query.qry");

Save as a clickable internal query.  The "1" means to overwrite, if existing.
	$workspace = $session->GetWorkSpace();
	$workspace->SaveQueryDef("My Query","Personal Queries\\folder",$queryDefObj,1);

Table of Contents





Run a pre-defined query from a Perl script.
Updated: 09/21/11
Version: 7.0.1.12
Queries that are saved in a user db have a path and name associated with them, as in "Public Queries/Admin/MyQuery". If that path is known, that existing query can be run the same way a scripted query is run.
	$workSpaceObj	= $session->GetWorkSpace;
	$queryDefObj	= $workSpaceObj->GetQueryDef("Public Queries/Admin/MyQuery");
If there are dynamic filters associated with the query, they are ignored when the query is run programmatically.
You cannot add any more filters to the query. If you do, existing filters will be overwritten.
You can add fields to the display set. However, any fields you add will appear in columns before the pre-defined ones. That is, if you add a field to display, it will be in the first column and not the last column, as might be expected.
WARNING: When you run an existing query programmatically, it automatically returns the dbid in the first column. That is, all of your result set columns will be shifted by one.
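For completeness, a hedged sketch of executing the retrieved query def with the standard result-set calls (note the dbid column shift described in the warning above):
	$resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute();
	while ( $resultSetObj->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		# Column 1 is the automatically added dbid; the query's own
		# display fields start at column 2.
		my $dbid = $resultSetObj->GetColumnValue(1);
		my $id   = $resultSetObj->GetColumnValue(2);
		print "$id ($dbid)\n";
	}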

Table of Contents





Have a query return only the latest history timestamp for each record.
Updated: 09/14/12
Version: 7.1.2
If you construct a query that returns the history.action_timestamp field, each record will be listed multiple times in the result set; once for each time the record was edited.
If you want to simply return only the latest time the record was edited, you'll need to edit the underlying SQL. You need to have the "SQL Editor" privilege to do that.
In the CQ fat client, create the query you want, including all the display fields and filters. Then, select View -> View SQL pane. You'll now see that the query editor has a fourth tab called "SQL editor". You can now edit the SQL query directly.
The following is a simple example of what change to make. Note the max() function and GROUP BY.
	# Before
	select distinct T1.dbid,T1.id,T4.action_timestamp from scr T1,history T4 where T1.dbid = T4.entity_dbid  and 16777433 = T4.entitydef_id  and (T1.dbid <> 0)

	# After
	select distinct T1.dbid,T1.id,max(T4.action_timestamp) from scr T1,history T4 where T1.dbid = T4.entity_dbid  and 16777433 = T4.entitydef_id  and (T1.dbid <> 0) GROUP BY T1.dbid,T1.id
WARNING: Once a given query has been edited using the "SQL editor" tab, it can no longer be edited using the "Query editor" or "Display editor" tabs. That is, from then on, you can only edit the query manually in the SQL code.

Table of Contents





Customize the column header name in a query's result set.
Updated: 05/06/13
Version: 7.1.2
Queries are run against the names of fields as they are defined in the schema. If a field has an un-intuitive name, the query result can be customized to show a better title for the column of data. Independent of which interface the change is made in, the customization is tied to the query definition in the database, so it will appear changed in all the interfaces. However, if you are already logged into the database in a different interface, the change will not appear if you re-run the query. You must log out and back in again to pick up query definition changes made elsewhere.

ClearQuest for Windows Client
This is the older VB-style interface. Edit the query. In the "Display Editor" tab, simply click the row in the Title column that you want to change.

ClearQuest
This is the newer Eclipse-based interface. Edit the query. On the "Defined Display Fields" page, simply click the row in the Title column that you want to change.

CQ Web
Edit the query. In the "Query Presentation" pane, simply click the row in the Title column that you want to change.

Table of Contents





Group filters together in the eclipse client.
Updated: 06/07/16
Version: 7.1.2
Creating groups of filter fields in the eclipse client isn't as obvious as it could be.
1) Add the fields to the filter list that you want grouped.
2) Right-click on one of the fields and select "Use fieldname".
3) Right-click on the other field and select "Group second-field with first-field".
If the second field is already inside an AND/OR, right-click on the grouping instead or you will create a new sub-grouping.
4) To change the AND/OR, right-click on it and select the opposite name.

Table of Contents





Set the default record type.
When submitting a new record by using the New Defect button in the CQ Client, it doesn't ask what record type to use. The record type it uses is the default, set in the CQ Designer. In the Designer, after opening the appropriate schema, right-click on the record type desired as the default and choose "Default RecordType". A check-mark will appear next to that statement in that record type and not in any others.

Table of Contents





Create a new stateless record.
Once the stateless record type exists, in the CQ client, go to Actions -> New... -> record-type and fill in the requested information. Now, when a user submits a new record, the new stateless record (for example, a project name and its related information) will be available to them.

Table of Contents





Create a stateless record type.
Stateless record types are not to be confused with states. States are stopping points along the development life cycle. Stateless records merely store information that is not part of the life cycle.
In the CQ Designer, right-click on Record Types - Stateless, choose Add and give the stateless record a unique name. Open the new folder and add fields and actions. Remember to give the stateless record a unique key.

Table of Contents





Set the stateless record type unique key.
Updated: 12/22/15
Version: 7.1.2

Each state-based record type has a unique identifier based on the database name concatenated with an 8 digit number. Conversely, stateless records are uniquely identified by a key designated by the record's creator. Each stateless record type must have this unique key set before it can be used.

After the stateless record is created, in the CQ Designer, open the stateless record's folder in the Workspace pane and double-click on Unique Key. If the field chosen isn't enough to uniquely identify the stateless record, more than one field can be used.

Before CQ 7: When working programmatically with records with multiple fields in the primary key, note that the field values are listed in the order in which they appear in the Unique Key pulldown list, which is not necessarily alphabetical order.
CQ 7 and later: The unique key is still made up of fields in the order in which they were added to the record type. Unfortunately in the Eclipse-based Designer, when you right-click on the record type and select Unique Key..., the fields are listed in alphabetical order. This makes it indeterminate which order they are supposed to be in when referring to the record.

Each part of the primary key is space-separated. For example, if a record has two fields as part of its primary key, Name = "Control" and Version = "1.0.3", the primary key would be "Control 1.0.3". Because the key's parts are space-separated, it's highly recommended that you do not use values with spaces in fields that make up a unique key; otherwise there is no way to parse the unique key parts programmatically.
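For example, a minimal sketch (assuming a hypothetical stateless record type called "Component" keyed on Name plus Version) of retrieving such a record through the API:
	# The display name is the key fields joined by single spaces, in unique-key order.
	my $name    = "Control";
	my $version = "1.0.3";
	my $entity  = $session->GetEntity("Component","$name $version");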

If you add an additional key field to a record type that has existing records in the user db, CQ will complain that the unique key violates a uniqueness constraint. Basically, empty fields cannot be part of a unique key. To get around this, after creating the field, don't yet add it to the unique key list. Instead, push the schema to the user database, fill in the field on each record with a value that makes that record's key unique. Then, add the new field to the unique key list and push the schema again. However, that would only work for the first database. If you have multiple levels of databases, such as Dev, Test, and Prod, you'll need to push the schema out to all databases, fill in all the empty fields, and only then add the new field to the unique key list. So, if you need to add another unique key field, you may have to do it over two releases.

Table of Contents





Create a new record type.
In the CQ Designer, right-click on Record Types and select Add. Simply give the new type a unique name. It will automatically be populated with the other folders, such as Forms, States and Actions, etc...

Table of Contents





Create a new record type family.
Grouping stated (not stateless) record types into a family allows the CQ user to query across multiple records of different types.
In the CQ Designer, select Edit -> Add Record Type/Family, or right-click on the Record Type Families folder and select Add.
If the family was created via the Edit menu, in the Add Record Type dialog box, choose Family and give the record type family a unique name; the new family will be automatically added to the Record Type Families folder. Either way, the name must contain only letters (upper or lower case), underscores, and/or numbers.
Members can be added by either right-clicking on the Members folder in the new family and selecting Add, or via Edit -> Record Type Family Members.
Double-click on the Fields grid in the new family. Right-click in the grid and select Add to add the fields on which you'll allow the users to query. The fields must be the same in all record types in the family, including Name and Type. If a field doesn't exist in one or more of the record types in the family, simply add it to those types. The newly added field does not have to be placed on a form; it just needs to exist in all record types to get past the Add field function.

Table of Contents





Remove a record type/family.
Updated: 04/07/09
In the CQ Designer, Edit -> Delete Record Type/Family, or simply right-click on the type to be deleted and select Delete. There are no restrictions on deleting a family. Deleting a family does not delete the actual record types.
When a field is deleted from a record type, that column of data still exists in the database. When a record type is deleted, the table is removed, as is the entry in the "entitydef" table.
NOTE: When a record type is deleted, if there are existing records in the user database, the history associated with those records is auto-removed as well.

Table of Contents





Rename a record type/family.
Updated: 08/08/11
In the CQ Designer, simply right-click on the type to be renamed and select Rename.

NOTE: If CQ is integrated with CC using UCM, the record type cannot be renamed after the first record/activity is generated.

WARNING: If you rename a record type after it has had a CQ package applied to it, you will not be able to upgrade the user database. If the record type was already renamed, check the schema back out, rename the record type back to the original name, check the schema back in, then upgrade the user database.

When renaming a record type, if you get a message saying something like "The database name is already in use", ignore it. The record type name may appear with the old name, but if you push the change to the user database and check the schema in and then back out again, you'll see that it was changed.

Table of Contents





Duplicate an entire record type.

DISREGARD: This doesn't work correctly :-(.

There is no CQ built-in tool for replicating record types. The following procedure is a bit of a kluge.
1) Open the schema for editing. Create a record type to hold the duplicated one. Check in the schema, note the new version of the schema, and exit the CQ Designer.
2) Use cqload exportintegration to do a partial export to extract the record type to be duplicated. For begin_rev, use "2", and for end_rev, use the latest rev noted in the last step. Revision one only contains the default stuff that already exists due to step (1). If there are many revisions, be patient.
3) Use cqload importintegration to import the output file of exportintegration. Give the name of the new Record Type and the next version of the schema as arguments on the CLI.
4) Open the CQ Designer and you will have an exact copy of the record type under the new name.

Table of Contents





View a record's history.
Most records will have a tab called History as part of the Defect_Base record form. However, if that isn't present, while in the Client, you can still run a query to get a result set of the records you are interested in, select a record and then select View->History.
Unless created by an administrator, history does not have a form of its own. So, if you run a query on history itself, you'll get a result set that is the history for all records in the user database. The result set will not be separated by the records the history belongs to. The only way to view history for a specific record is to have a History tab or View->History.

Table of Contents





Remove a record's history.
As of CQ2001A, it isn't possible to remove history records using CQ without removing the defect records to which they are attached. However, as an unsupported workaround, one can remove history from a SQL Server or Oracle database using vendor tools.
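For example, an unsupported, hedged SQL sketch (back up the database first; the 'Defect' record type, the 'defect' table name, and the record ID are examples only) that removes one record's history rows while leaving the record itself intact:
	DELETE FROM history
	WHERE entitydef_id = (SELECT id FROM entitydef WHERE name = 'Defect')
	  AND entity_dbid  = (SELECT dbid FROM defect WHERE id = 'SAMPL00000014');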

Table of Contents





Clone a record type.
Updated: 05/12/10
As of CQ2003, there is no way to create a new record type based on an existing one. I.e., you cannot clone a record type.
Still true in CQ 7.0.1.

Table of Contents





Delete a record.
Updated: 11/28/17
Version: 8.0.1.14

The "Delete" action is a bundled part of every schema. In the client, simply click on the Action menu and select Delete. If the Delete action is not there, it probaby wasn't enabled for the current record type. As of CQ2001A, even if you do not have permission to Delete a record, the action will still show up.
Note that when an action of type DELETE is performed on a record:
- If it references another record (has REFERENCE{_LIST} links), the parent_child_links table is not cleaned up.
- If another record references it, the delete will fail.
- AuditTrailLog records are not deleted. A new entry is created showing the Delete action.
- The associated history records are removed.
In a script:
	eval {
		$return = $session_o->DeleteEntity($entity_o,"Delete");
	};
	if ( "$@" ne "" || "$return" ne "" ) {
		...
Table of Contents



Count history records.
In the CQ Client, it's a simple matter to get a count of records of a certain record type, such as Defects. This is accomplished by running a query against the record type's unique key. In the case of Defects, a query on "id" with no filtering will produce a complete list of Defects. The lower-right corner of the Client will show the count.
Unfortunately, as of CQ 2002, a history record's unique key is not queryable. However, since every history record generated is associated with an action, query on "action_name" with no filtering. Ignore the warnings that there is no form to display history independent of its parent record. When the query is complete, the count will appear in the lower-right corner of the Client.
Alternatively, if the user knows SQL, commands can be run on the CQ database itself to count the number of History records.
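For example (a hedged sketch; the 'Defect' record type name is a placeholder), either count all history rows, or narrow the count to one record type via the entitydef table:
	SELECT COUNT(*) FROM history;

	SELECT COUNT(*) FROM history
	WHERE entitydef_id = (SELECT id FROM entitydef WHERE name = 'Defect');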

Table of Contents





Export/import dynamic lists.
There are many different data types that can be exported and imported using the Export Utility GUI. However, dynamic lists must be exported and imported from the CLI. New in CQ 2001, use the importutil command to export and import dynamic lists to and from text files. When importing, the "list_name" cannot already exist.
  # importutil exportlist [-dbset dbset_name] db_login db_password db_name list_name output_text_file

  # importutil importlist [-dbset dbset_name] db_login db_password db_name list_name input_text_file
Table of Contents



Export records from a CQ database.
Updated: 05/01/12
There may be occasion to export all the records from a database, perhaps for import into a different database, perhaps if changing database vendors.
In the ClearQuest (eclipse) client, go to File -> Export -> Records; the resulting GUI should be self-explanatory. You can also export a single record by right-clicking on it in a query result pane and selecting Export.
Unfortunately, there is no built-in way to programmatically export records. If records need to be exported programmatically, you'll have to generate the export file with a custom script. Export some files using the Export Tool to see the expected format.
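A minimal CQPerl sketch of the custom-script approach (the field names, separator, and output path are assumptions; match the layout to a file produced by the Export Tool):
	use CQPerlExt;
	my $session = CQSession::Build();
	$session->UserLogon("admin","password","SAMPL","");

	my $queryDefObj = $session->BuildQuery("Defect");
	$queryDefObj->BuildField($_) for ("id", "Headline", "State");
	my $resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute();

	open(my $out, ">", "C:\\temp\\defect_export.txt") or die "Cannot open output file: $!";
	while ( $resultSetObj->MoveNext() == $CQPerlExt::CQ_SUCCESS ) {
		# One tab-separated row per record; columns are in BuildField order.
		print $out join("\t", map { $resultSetObj->GetColumnValue($_) } 1..3), "\n";
	}
	close($out);
	CQSession::Unbuild($session);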
In older versions of CQ on Windows, you may still find Start -> Programs -> Rational ClearQuest Export Tool. When started, it will ask which database to work on. If the db is considered a "Test" db by CQ, that db will have to be converted to a Production db first via the Designer. That is, you cannot use the CQ Export Tool to export records from a "test" database.

Table of Contents





Retire a record type or family.
If you'd like to ensure a record type or family doesn't even show up in lists in the Client, but don't want to delete the record type or family, it can be "retired".
Unfortunately, there is no direct way to do this, such as making a user inactive. However, what you can do is create a "dummy" group that has no user members. In the record type's Actions matrix, set the Access Control for every action to User Groups and set the user group to the dummy group. Since nobody can Submit that type of ticket, it won't show under the New button in the Client.

Table of Contents





Submit a record via email.
Users can submit a record/ticket to CQ via email. This gives users that may not have access to the internal CQ system via a client interface a way to submit records. This only covers submission of and changes to records. See the Administrator's Guide section called "Administering ClearQuest E-Mail -> Enabling E-Mail Submission Through the Rational E-Mail Reader".
A CQ installation runs the "Rational E-Mail Reader" as a Windows service called "Rational ClearQuest Mail Service". That service is also used by RequisitePro. By default, it runs as "Local System". If that service needs to access network resources, you'll need to change the Log On in its Properties dialog.
The Rational E-Mail Reader requires a dedicated e-mail account for each CQ user database. The email server must support SMTP or MAPI.
To configure the E-Mail Reader, run
  # CQ-home\mailreader.exe
After configuring the E-Mail Reader, restart the service.
For a new record to be accepted via an email submission, it needs to be in a specific format. Create and test an email format so that users can easily submit records using that template. If an email isn't formatted properly (including required fields and specific field values), the E-Mail Reader will kick it back to the submitter with an error message.
The Subject line must be:
record-type  action  record-ID
The record-type, such as "Defect", is mandatory for all submissions. The action, such as "Submit", is only necessary if a default wasn't specified when configuring the E-Mail Reader. The record-ID, such as "SAMPL00000014", isn't necessary if submitting a ticket for the first time. It's required if modifying an existing ticket.
The body of the email must conform to:
fieldname:value
If the field is multiline, enclose the fieldname:value pair in curly brackets "{}". The right curly bracket "}" must be on a line by itself. Fields can be in any order. Email submission does not support attachments.
Examples:
Defect Submit

Headline: inventory report is not running correctly
Severity: 1-Critical
Project: ClassicProject
Priority: 2-Give High Attention
{Description: When running an inventory report, application crashes if
more than 50 items are included in the report.
}
Or, in the case where a default action of "Modify" and a default field of "Notes_Log" have been set up in the E-Mail Reader, a user could simply update the notes in a record with:
Defect SAMPL00000017

{Today we posted a patch for this problem on the company intranet.
Please download the patch to solve your customer’s problem.
}
Table of Contents



Display a record from an external script.
The following code snippet works, but is an unsupported use of the tool. It will bring up a desired form without explicitly launching CQ. However, note that it does actually load CQ in memory to access the ticket, so it's not very fast.
$username    = "ejo";
$password    = "";
$connection  = "2003.06.00";
$db          = "mydb";
$record_type = "Defect";
$id          = "mydb00000005";

use Win32::OLE;

$CQ      = Win32::OLE->new('ClearQuest.Application2') || die('cant get app, ' . Win32::OLE->LastError() . "\n");
$Session = $CQ->GetSession() || die('cant get session, ' . Win32::OLE->LastError() . "\n");
$Session->Logon($username,$password,$db,$connection);
$Form    = $Session->CreateForm($record_type, undef, undef, $id);  # undef placeholders for the unused optional arguments
$Form->StartForm();

exit 0;
Table of Contents



Ensure consecutive record IDs.
If a user is in the process of submitting a record and then cancels the job prior to submission, a record ID gets used up anyway. Unfortunately, as of CQ 2003 there is no way to control that feature. However, one can set up a pseudo record tracking ID that can be controlled and guaranteed sequential. The following will create a sequential number on a per Project basis. It assumes you have a field called "Project" in your record type.

WARNING: Having a custom sequential number sounds like a good idea up front, but has a down side too. The built-in functionality of the "Duplicate" and "Unduplicate" actions is built around the CQ ID. That functionality can be reproduced, but it takes some work.

In the CQ Designer:
1) Create a stateless record type called "IDCounter".
2) Add two fields to the IDCounter stateless record, one called "LastID" (type INT) and one called "Project" (type SHORT_STRING).
3) Create a new form called "Counter" and place the two new fields in a tab called "General".
4) Make the Project field the IDCounter stateless record type's unique key.
5) Create a new action for the IDCounter stateless record called "Modify" with a Type MODIFY. You can also add a "Delete" action if desired. Also, if not already done, create a user group called something like "cq_admins" and place only the CQ administrator in that group. Lock down ALL actions in the IDCounter record type to that group. Nobody should be creating or modifying IDCounter records except the CQ admin.
-----
6) Back in the regular stated record type, add a field called "ConsecutiveID" (type INT). Add it to forms as appropriate. Change the field to READONLY for all states in the Behaviors matrix.
7) Add a Validation hook (ACTION_VALIDATION) to the Submit action in the Actions matrix. The Perl code might look like the following (don't set $result to anything):
# Get the stateless record's unique key.
# In this case, it's based on the name of the project.
  $key = $entity->GetFieldValue("Project")->GetValue;
# Set a short-cut to the current session.
  $session = $entity->GetSession;
# Get the correct IDCounter entity based on the unique key.
  $IDEntity = $session->GetEntity("IDCounter","$key");
# Read the current value for LastID.
  $LastID = $IDEntity->GetFieldValue("LastID")->GetValue;
# Open the IDEntity for modification.
  $session->EditEntity($IDEntity,"Modify");
# Increment the counter.
  $newcount = $LastID + 1;
  $IDEntity->SetFieldValue("LastID","$newcount");
# Check the validity of the change.
  $IDEntity->Validate;
# Commit the new value to the database.
  $IDEntity->Commit;   
# Set the new value in the current entity.
  $entity->SetFieldValue("ConsecutiveID","$newcount");
An alternative to the above script is to create a sequence number that has the current year embedded, with the sequence reset annually. Create the IDCounter record type as described above, with the addition of a field called "Current_Year" of type INT whose default value is set to the current year. Note that the field called "LastID" is called "Last_Numeral" here. Note also that the field in the stated record type has been called "Log_Number" of type SHORT_STRING instead of "ConsecutiveID" of type INT. Another difference here is that in the example above, the unique key for each IDCounter record was the name of a project set in the stated record. In the following, the unique key is the stated record type. That is, the above example created a unique sequence number for each project, while this one creates a sequence number for each record type.
	# This routine will give a submitted record a log number that
	# looks like YYYY-nnnn, where YYYY is the current 4-digit year
	# and nnnn is a sequential number that is reset each year.
	# Example:  2004-0012
	# Its purpose is to ensure records have serial numbers without
	# gaps.  If the CQ id is relied upon, the numbers can have gaps
	# because the id is assigned to a record before the record is
	# committed to the database.  If the user cancels the submission
	# before committing it, the CQ id sequence will have a gap.

	my $log_number;

	# Get the correct IDCounter entity based on the unique key.
	# It is assumed that there exists an IDCounter stateless record
	# entity named for each record type.  Open the IDCounter record
	# for edit.
	my $session       = $entity->GetSession;
	my $entitydefname = $entity->GetEntityDefName;
  	my $IDEntity      = $session->GetEntity("IDCounter",$entitydefname);
	if ( ! "$IDEntity" ) {
		$result = "There is no IDCounter record whose Unique Key is ($entitydefname).\n";
		goto LOG_NUMBER_END;
	}
  	$session->EditEntity($IDEntity,"Modify");

	# Get the stored and current years.
	my $stored_year = $IDEntity->GetFieldValue("Current_Year")->GetValue;
	my $this_year   = (split(/\-/,GetCurrentDate))[0];

	# This is the same year.
	if ( "$stored_year" eq "$this_year" ) {

		# Get the stored last number and increment it.
		my $numeral = $IDEntity->GetFieldValue("Last_Numeral")->GetValue;
		$numeral++;

		# Create a padded 4-digit number out of the numeral.	
		$numeral = "0$numeral" if ( $numeral < 1000 );
		$numeral = "0$numeral" if ( $numeral < 100  );
		$numeral = "0$numeral" if ( $numeral < 10   );

		# Set the new numeral in the IDCounter.
		$IDEntity->SetFieldValue("Last_Numeral",$numeral);

		$log_number = "${this_year}-$numeral";

	# This is a new year.
	} else {

		# Reset the year and numeral in the IDCounter.
		$IDEntity->SetFieldValue("Current_Year",$this_year);
		$IDEntity->SetFieldValue("Last_Numeral","1");

		$log_number = "${this_year}-0001";
	}

	# Check the validity of the IDCounter record and commit the change.
  	$IDEntity->Validate;
  	$IDEntity->Commit;

	# Set the new value in the current entity.
  	$entity->SetFieldValue("Log_Number",$log_number);

	LOG_NUMBER_END:

}
In the CQ Client:
1) Submit a new IDCounter stateless record.
1a) Example 1: Enter the name of a Project and set LastID to 0. You've just initialized the IDCounter for that project.
1b) Example 2: Enter the name of a record type as the unique key, set the Current_Year and set the Last_Numeral to 0. You've just initialized the IDCounter for that record type.
2) Open a new change request and test the results. Note, however, that this test should be done in a test database. That is because the whole point of having these numbers is that there isn't a break in the sequence. If you perform the test in the actual user database, you'll probably wind up deleting that record, which would leave a gap in the sequence :-(.

Table of Contents





Open a record for edit from a URL.
Updated: 07/12/07
New in CQ 7.0.
http://publib.boulder.ibm.com/infocenter/cqhelp/v7r0m0/index.jsp?topic=/com.ibm.rational.clearquest.user_web.doc/c_shortcut_create_lo_tasks.htm
Drill down to "Creating record shortcuts" -> "Creating a shortcut to modify a record".

Table of Contents





Programmatically add an attachment.
Updated: 02/17/11
Version: 7.0.1.12
$entity->AddAttachmentFieldValue("Attachments","$full_path","$description");
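For context, a hedged sketch of the surrounding edit/commit calls (the record ID, file path, and "Attachments" field name are examples):
	my $entity = $session->GetEntity("Defect","SAMPL00000014");
	$session->EditEntity($entity,"Modify");
	$entity->AddAttachmentFieldValue("Attachments","C:\\temp\\server.log","server log from the failure");
	my $error = $entity->Validate();
	die "Validate failed: $error" if ( "$error" ne "" );
	$entity->Commit();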

Table of Contents





Update multiple records at once in a user db.
Updated: 08/08/11
Version: 7.0.1.12
In the client interface, if you select multiple records in a query's result set, you can perform the same action on all of them. Whatever change you make to the first record in the selected set will be applied to the rest of the records.
This capability was not available in the old web interface, but it is in the new web interface, with limitations. Unlike the client, the new web only allows you to edit fields with text box form controls. You can't edit multiline fields, checkboxes, etc.
Be careful when editing multiple records in batch mode. If a field sets a secondary, hidden field whose value is not appropriate for the other records, the other records will get updated with that value too. The same is especially true if the edit causes the creation of a supporting, stateless record. Unfortunately, I don't know of a clever way to stop users from editing records in batch mode when those limitations/risks exist in a schema. It would be useful if the CQ API could detect a modification being made in "batch" mode.

Table of Contents





Delete history records.
Updated: 11/27/17
Version: 8.0.1.14
The "installutil scrubhistory" command will delete history records.
The "-show" option will only report what would have been deleted.
Example:
	installutil scrubhistory dbset user password userDB recordtype
		(-all | { -action | -modify-only | -before date } ... ) [-verbose] [-show]

	installutil scrubhistory DEV admin adm123 BUGS Defect -before 2014-01-01 -verbose
Table of Contents



Get a record type's database table name.
Updated: 06/18/18
CQ version: 8.0.1.14

Designer
Unfortunately, there doesn't appear to be a Schema Designer method for getting the corresponding table name.

API
Unfortunately, there doesn't appear to be an API method for getting the corresponding database table name.

Manually
The only way I've found that it can be done is manually. You have to log into the database itself and inspect the table you suspect is the correct one. Compare field and column names and data values to be sure.
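For example (a hedged sketch; the table and column names below are guesses to verify, not facts), spot-check a suspected table against a record you already know:
	SELECT id, dbid, state FROM defect WHERE id = 'SAMPL00000014';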

Table of Contents





Record templates.
Updated: 02/26/19
CQ version: 9.0.1.4

At the bottom of all new records, you’ll see a field called “Template”. That is built-in ClearQuest functionality and not part of the custom schema. A template can be used to populate a set of fields with desired values and then saved. This is handy if you have to create many records that have similar field values. Templates are personal.
In the client interface, the template information is stored in “C:\Users\\.Rational\ClearQuest\rcp\.metadata\.plugins\com.ibm.rational.clearquest.ui”.
In the web interface, the template information is stored in the database.

Table of Contents





Change the state of a record using SQL.
Updated: 05/31/19
CQ version: 9.0.1.4

The state of a record is normally changed using one of the standard interfaces. However, if a record isn't editable for some reason, the state can be changed using SQL to get it out of the way of queries and reports. A record can become uneditable due to corruption, or too much data, or too many history records, etc...
Note that pdsql that comes bundled with CQ does not allow this sort of modification. You'll need to use a database tool associated with your database vendor.
NOTE: The name of the database table may not be the same as the name of the record type as you know it in the schema. Unfortunately, the schema doesn't tell you what the table name is, the way it does for field/column names.
NOTE: The dbid values determined in the first two commands are unique to each database.

# Get the dbid of the record type.
SELECT id FROM entitydef WHERE name = 'record-type-name';
Ex: SELECT id FROM entitydef WHERE name = 'Defect';

# Get the dbid of the new state.
SELECT id FROM statedef WHERE entitydef_id = record-type-dbid and name = 'state-name';
Ex: SELECT id FROM statedef WHERE entitydef_id = 16777224 and name = 'Cancelled';

# Change the state.
UPDATE record-type-table-name SET state = state-dbid WHERE id = 'record-id';
Ex: UPDATE defect SET state = 16777349 WHERE id = 'SAMPL00000014';
  
Table of Contents



General.
Updated: 12/27/12
Version: 7.1.2
RPE = Rational Publishing Engine
Nothing yet. I plan on playing with this soon.

Table of Contents





Create a new schema.
The schema determines the look of forms and the life cycle of records submitted to a database. To create a new one, enter the CQ Designer, Cancel the initial request to open a schema, then File -> New Schema... and follow the prompts. You must base your new schema on an existing one. The schema name can only contain letters (upper or lower case), numbers and underscores.
If this is the first schema ever, CQ will ask for the name of the schema repository. There can be only one repository per CQ server install.
NOTE: If you're going to associate this schema with a new MS Access database, the wizard will prompt for the name and create it for you. If associating with a new database from any other vendor, the database will have to be created separately. In that case, skip the last prompt for a database name, close the schema, and then create the new database you would like to associate with before making any modifications.
NOTE: New schemas cannot be created from client installs.

Table of Contents





Remove/delete a schema.
Updated: 07/20/06
Once a database and schema are created and connected, they are inseparable. One must first delete the associated database. In the CQ Designer -> File -> Delete Schema... -> select the schema -> Delete.
In the Designer, a database can be Undeleted only if the physical database has not yet been removed. That is, the deletion is merely a logical deletion of the db name from the schema repo.

Table of Contents





Rename a schema.
Updated: 07/20/06
Unfortunately, there is no way to rename a schema within CQ. However, there is a kludgy way to rename one, but it only works before the schema has been pushed to any databases. That is, any database that has the schema under the old name will not be able to be upgraded with the same schema under the new name.
1) Export the schema from the source schema repo to a text file using cqload exportintegration.
2) In the text file, replace references to the schema with the new name. Don't use MS Word, as it will mess up line terminations. I've found it best to use Wordpad. Be sure to avoid replacing the "schema name" in parts of the schema that use the same word, but not the name of the schema. That is DO NOT perform a global find-and-replace. The schema name will be embedded in lines that look like:
...
PACKAGE_USAGE ( "Enterprise", 1 , "Notes", "4.0", "", "")
...
ADD master_schemarevs  ( "Enterprise", 3 , "", UNRESTRICTED_SCHEMA , 24 )
...
etc...
3) Import the schema into the destination schema repo using cqload importintegration.

Table of Contents





Import a schema.
The following CLI utility requires the schema to be in a file in a format defined by exporting a schema via "cqload exportschema".
The cqload command lives in the cquest-home-dir. However, the CQ install does not update your system Path environment variable to include that path. You need to have Super User privileges in CQ to execute cqload. Yes, the login and password need to be written on the CLI. If there is no password, use empty double-quotes "". No interaction is necessary. You'll see the newly imported schema next time the CQ Designer is started. If it's already started, close it for the duration of this operation.
  C:\ cqload importschema -dbset schema-repo login password schemafile.txt
Instead of -dbset, you can also set the schema repository's name in an environment variable called "BB_TEST_DBSET_NAME".
NOTE: The name by which the newly imported schema will be known was set during the exportschema phase. If a schema already exists with that name, the import will not work. To rename a schema for import into the same schema repository, open the schemafile.txt file and do a search and replace on all instances of the old schema name. That is, go through the export file and change the name everywhere it appears.

To import a schema version:
The schema being updated cannot be checked out. The integ_version must be the next unused version number for that schema. Since all changes to the schema will be imported, it's unknown why you need to supply the primary record type's name. The integ_name will be part of the comment applied to the new schema version; the comment that will be automatically applied is "Apply integ_name integration version integ_version".
  C:\ cqload importintegration -dbset schema-repo login password schema_name record_type integ_name integ_version import_file form_name
CQ-login	CQ Designer login name that has Super User privileges.
password	Password for CQ-login. If none, supply empty double-quotes "".
schema_name	The name of the schema into which you will import.
record_type	The primary record type for the schema.
integ_name	A name (comment) applied to the version being imported.
integ_version	The version of schema_name you are loading.
import_file	The .txt file containing the data exported by exportintegration.
form_name	(Optional) The form to which new tabs will be added. If none, supply empty double-quotes "".

Table of Contents





Export a schema, version of a schema, or record type.
Updated: 10/07/11
The cqload command lives in the cquest-home-dir. However, the CQ install does not update your system Path environment variable to include that path. You need to have Schema Designer privileges in CQ to execute cqload. Yes, the login and password need to be written on the CLI. If there is no password, use empty double-quotes "". No interaction is necessary.
  C:\ cqload exportschema -dbset schema-repo login password schema_name schema_name.txt
Instead of -dbset, you can also set the schema repository's name in an environment variable called "BB_TEST_DBSET_NAME". Note that exportschema will export ALL versions of the schema to one giant text file.

WARNING: If you export a schema and there is a checkedout version in the source repository, the export file will contain a reference to the checkedout version that is empty. For example, if the last checkedin version of the schema is 155 and there is currently a checkedout version 156 with numerous changes, when you export the schema, the export file will contain a reference to schema version 156, but the file won't contain any of the changes. The main problem with this is that when you import the schema into a different schema repository, the schema will show that it has 156 versions. This has two downsides. 1) An administrator looking at the schema in the destination repository will think they have the changes in version 156, when they actually don't. 2) If you subsequently attempt an export of version 156 from the source repository to update the destination repository, the destination repository won't import the changes because it thinks it already has version 156. The upshot here is: simply ensure the latest version of the schema is checked in before running the exportschema command.

To export a partial schema (one or more versions) of a single record type, use:
  C:\ cqload exportintegration -dbset schema-repo login password schema_name begin_rev end_rev record_type schema_name.txt
To export a single version of an entire schema, you have to create a new schema based on the version to be exported. Don't connect it to a database during creation, so that it can be more easily removed later. Then, use "cqload exportschema" to export that new schema.

To import a partial schema, use importintegration.

Table of Contents





Determine databases associated with a schema.
With the schema open for editing in the CQ Designer, go to View -> Schema Summary... and double-click on the Schema Name.
Alternatively, when the CQ Designer is first opened, if the window pops up requesting which schema to open for edit, one of the columns to the right tells which, if any, database is associated with that schema.

Table of Contents





Determine the schema repository database.
In the Designer, go to Database -> Database Properties... and look at the Properties of the Logical Database Name called MASTR (standard name). It will tell with what Physical Database Name it is associated.

Table of Contents





Edit the same schema in two different schema repositories.
This is useful if a CQ admin wants to work on the schema away from work, which is presumably where the official schema repo lives. However, the success of this scenario hinges on the fact that the schema can only be edited in one of those schema repos at a time. This is a VERY important caveat.
Export the schema from the real schema repo using exportschema. The schema being "replicated" must be in a checkedin state. Create a new schema repo on the machine to be taken away. It only needs to be in MSACCESS. Import the schema into the newly created schema repo using importschema. Edit the schema in the "remote" repository and check it back in. Export the schema version(s) from the remote repository using exportintegration. Be sure to include all versions created in the remote schema repo since the last import. Import the schema version(s) into the real schema repo using importintegration. You can then use this method to go back and forth between the two.
Note: Even though you may have created more than one version while working on it in the remote repository, all those versions are imported into a single version back in the real repository. For this reason, the actual version numbers that you work on in either repository may diverge. But, those numbers aren't critical as long as you always include in an export ALL the versions, and ONLY the versions, created since the last import.

Table of Contents





Delete a schema version.
Deleting old schema rev's has a dramatic positive impact on certain CQ operations, like checking out the schema (which creates a new version of that schema) and a few other operations. So in general this is a good idea. It's possible to delete any schema rev that does not have a user database associated with it; however, there is a bug in CQ that makes it advisable to do this with extreme caution... read on for details.
The bug has to do with the tracking of schemas and the packages that they include. If you have applied any packages to your schema, that fact is tracked in a table in the database. It turns out that that information is important especially when upgrading the databases to a new version of CQ (and thus, a new version of the package). CQ needs to know which versions of which packages are applied to the schema in order to apply newer package versions to it.
The bug is that if you delete a schema-rev which is the revision at which you originally applied the package, then CQ loses track of the fact that the package is applied to the schema. Once that happens, you generally can't upgrade that schema to a new version of CQ because CQ doesn't realize that the old version of the package is applied to the schema and thus, will not let you apply any new versions of the package to it. It's pretty messy actually.
But if you are willing to do a little homework and research the schema-rev's at which packages were applied and be sure that you're not deleting those, then it is in general safe and advisable to delete old schema rev's.

Table of Contents





Embed an instruction manual.
Updated: 04/18/11
There are a few ways you can provide CQ end-user instructions. This is useful if opening up the CQ db to a large, non-technical user base.
1) Add a "static text" box that points to a location on your network where the instructions can be found. This has the downside that the user would have to copy and paste the URL into a browser, as you can't double-click on a static text box. The upside is that the URL is part of the schema, so if changed, is automatically changed across all existing and future records.
2) Have the instructions available on the web. Create a read-only text field whose static value is a URL to the instructions. Starting in CQ 2002 users can double-click on a URL that launches their default browser. However, this has the downside that if the URL ever changes, all existing records will need to be backfilled with the new URL.
3) Embed the instructions, if there aren't too many, into the schema. Create a stateless record type called "Instructions". Create a dummy field to be the unique key, give it a default value, but don't display it on a form. There will only be one instructions record. Create a set of fields called "Instruction_page1", "Instruction_page2", etc... of type MULTILINE_STRING. On the form, have one large multiline field per tab. You can then have tabs that are named for different instruction subjects, such as "Submit a record". If users are entering via the web interface, you can set a start-up query that ensures the instructions are the first thing the user sees.
4) On a form, access the Properties of each field and fill in the Help text box.

Table of Contents





Build, utilize, and refresh session variables.
Updated: 02/23/06
If you need to populate a pulldown list with data derived from many records of a different record type, you need to run a query at run time. If the same data is needed in several places on a form, or possibly by other record types in that session, the query can be run once up front, the data placed into a session variable, and the pulldown list populated by reading the session variable array. Because the query is only run once and the data stored in an array, performance can be improved.
However, be careful. Since the data is only refreshed when the session is first started, if you have any utilities out there that are running as services, the session may never get restarted unless you make allowances for that.
Since the data is intended to be used by multiple record types, the code should be placed in a Global Script. Because this data would be needed for all actions, the global script should be called from the Initialization hook of a BASE type action.

	' ''''''''''''''''''''''''''
	' Run the global script to build an array of user data.
	GatherUserData()

In this setup, the query inside the global script gets run when the user performs any action, other than just viewing a record. Once the data is in the session variable, it won't be updated unless the user logs out and back in. That is, the data in the variable is static for the rest of the session.
Note that session variables, while making the code convenient and maintainable inside the schema, provide limited performance improvement in the web. Even though this data is stored in a session variable, the data is actually stored server side. That means that each time the data is needed, the web client has to go back to the server to get the data. If there is a great deal of data in the variable, the wait time is almost as long as running a query to get the data.
See the paragraph below the code example for a discussion on refreshing the session variable while still logged into a session.

Any field's Choice List hook can run code similar to the following to populate its pulldown list.

	' '''''''''''''''''''''''''''
	' Present a list of user fullnames.
	dim x
	userData = GetSession.NameValue("UserData")
	for x = 0 to ubound(userData) - 1
		fullname = userData(x,2)
		choices.AddItem(fullname)
	next

	' Add a blank so that the field can be cleared.
	choices.AddItem(" ")

The code below is a VBScript example of placing user information into a session variable.


sub GatherUserData

	' ''''''''''''''''''''''''''''''''''
	' This routine will query for Active User records and place information
	' into a session variable called UserData. Display fields can be added
	' to the query, but their order or removal cannot be changed without
	' changing all hook scripts that retrieve the data. Those scripts
	' expect a certain piece of datum to be at a certain index.
	' While gathering the information, turn the list of groups (which are
	' presented on separate lines in the result set) into a comma-separated list.
	' This will also set element 7 to indicate whether or not the user is
	' an "admin" group member.
	' 0: is_active
	' 1: login_name
	' 2: fullname
	' 3: phone
	' 4: email
	' 5: comma-separated list of groups
	' 6: 1 = TRM, 0 = non-TRM

	set sessionObj = GetSession

	record_count = 1
	if sessionObj.HasValue("UserData") then
		countArray = sessionObj.NameValue("UserData")
		record_count = ubound(countArray)
	end if

	if not sessionObj.HasValue("UserData") or record_count = 0 then

		dim dataArray(), x, y, previous_login, group_list, group

		set querydef = sessionObj.BuildQuery("users")
		querydef.BuildField("is_active")
		querydef.BuildField("login_name")
		querydef.BuildField("fullname")
		querydef.BuildField("phone")
		querydef.BuildField("email")
		querydef.BuildField("groups")
		set operator = querydef.BuildFilterOperator(AD_BOOL_OP_AND)
		operator.BuildFilter "is_active", AD_COMP_OP_EQ, "1"
		set resultset = sessionObj.BuildResultSet(querydef)
		resultset.EnableRecordCount
		resultset.Execute

		redim dataArray(resultset.RecordCount,6)

		x = 0
		previous_login = ""
		while (resultset.MoveNext = AD_SUCCESS)
			if previous_login = resultset.GetColumnValue(2) then
				' Another group row for the same user: append to the group list.
				group = resultset.GetColumnValue(6)
				if group = "ETM" or group = "TERM" then
					dataArray(x - 1, 6) = 1
				end if
				dataArray(x - 1, 5) = dataArray(x - 1, 5) & "," & group
			else
				' First row for this user: copy the display fields.
				for y = 0 to 5
					dataArray(x, y) = resultset.GetColumnValue(y + 1)
				next
				group = resultset.GetColumnValue(6)
				if group = "admin" then
					dataArray(x, 6) = 1
				else
					dataArray(x, 6) = 0
				end if
				previous_login = resultset.GetColumnValue(2)
				x = x + 1
			end if
		wend

		' Create the session variable.
		sessionObj.NameValue "UserData", dataArray

	end if

end sub

Because the data stored in a session variable is static for the rest of the session, it can make it difficult for testing when records need to be changed and results verified in real time. That is, if performing testing and you need to modify the data in a record whose data is stored in a session variable, you won't see the change elsewhere unless you log out and back in, which can be a major pain during testing.
To get around this, create an action for each session variable that you want to reset during the session. The action needs to be of type RECORD_SCRIPT_ALIAS and point to a record script that contains the following code. Because the data array is emptied, when the global script is called, it will run a query and repopulate the array with fresh data. Yes, there are probably other ways to accomplish this same type of data refresh that don't involve creating a bunch of new actions.

	' '''''''''''''''''''''''''''''
	' This is just a record script front end for the global script.
	GetSession.NameValue "ApplicationData", array("")
	GatherApplicationData()

Table of Contents





Upgrade a schema version in a different dbset without CQ MultiSite.
Updated: 06/08/06
If the schema repository is located in a different city/timezone/country, the login time to the CQ Client can be very long. Normally this is solved by replicating both the schema repository and the user database to a local server via CQ MultiSite. However, if you don't need to keep the user db in sync, perhaps because the users in the different cities use the schema but don't share record data, you can have a local schema repository for the users and keep the versions of the schema in sync without using CQMS.
The following commands will export and then import versions of a schema. The export command can write a range of versions into one file. However, the import command must be run one version at a time. In theory you should be able to simply give the import command the highest version to which you want to upgrade, but I haven't had any luck with that. In fact, I've found sequence errors in the text file when multiple versions are done at once. For that reason, I placed the export and import commands in the Perl script below and run them one version at a time. You can type "cqload exportintegration" or "cqload importintegration" to get a usage statement. Also, the admin manual has a section on these commands.
# This PERL script will export the designated schema versions from the designated
# source schema repo (dbset) and import them into the designated destination schema repo.

# You must log in as a superuser in both the source and destination schema repos.

# The destination schema repo cannot be "Defect_Tracking".

# It assumes the schema name is the same in both dbsets.

# Eric J. Ostrander
# Updated: 06-06-06


#######################
# Initialize some stuff.
use CQPerlExt;
use Getopt::Long;
$script		= (split(/\\/,$0))[-1];


$usage = "cqperl $script
\t--sl\t<source-login>
\t--sp\t<source-passwd>
\t--sdbs\t<source-dbset>
\t--sch\t<schema-name>
\t--bv\t<begin-schema-version>
\t--ev\t<end-schema-version>
\t--dl\t<destination-login>
\t--dp\t<destination-passwd>
\t--ddbs\t<destination-dbset>
\t[--preview]";

if ( scalar(@ARGV) < 16 ) {
	print STDERR "\nUsage error:\n$usage\n\n";
	exit 1;
}
$preview	= 0;
GetOptions(	"sl=s"		=>	\$src_login,
		"sp=s"		=>	\$src_passwd,
		"sdbs=s"	=>	\$src_dbset,
		"sch=s"		=>	\$schema_name,
		"bv=i"		=>	\$begin_ver,
		"ev=i"		=>	\$end_ver,
		"dl=s"		=>	\$dest_login,
		"dp=s"		=>	\$dest_passwd,
		"ddbs=s"	=>	\$dest_dbset,
		"preview"	=>	\$preview ) || die "\n$usage";

if ( "$src_dbset" eq "$dest_dbset" ) {
	print STDERR "\nThe source and destination dbsets cannot be the same!\n\n";
	exit 1;
}

if ( $preview ) {
	print "Preview mode is ON.\n";
} else {
	print "\nWARNING: preview mode is NOT on!\n\n";
}

# The destination dbset cannot be "Defect_Tracking".
if ( "$dest_dbset" eq "Defect_Tracking" ) {
	print STDERR "\nThe destination dbset cannot be \"Defect_Tracking\"!\n\n";
	exit 1;
}


#######################
print "\nDetermining if destination schema ($schema_name) even needs to be upgraded ...\n";
$adminSession = CQAdminSession::Build;
$adminSession->Logon($dest_login,$dest_passwd,$dest_dbset);
$schemasObj = $adminSession->GetSchemas;
for ( $x = 0; $x < $schemasObj->Count; $x++ ) {
	$schemaObj	= $schemasObj->Item($x);
	$schemaName	= $schemaObj->GetName;
	if ( $schemaName eq $schema_name ) {
		$schemaRevsObj	= $schemaObj->GetSchemaRevs;
		$schemaRevObj	= $schemaRevsObj->Item($schemaRevsObj->Count - 1);
		$schemaRev	= $schemaRevObj->GetRevID;
		last;
	}
}
if ( $schemaRev >= $end_ver ) {
	print "($schema_name) schema version in ($dest_dbset) is already ($schemaRev), so doesn't need to be upgraded.\n";
	goto FINISH;
} else {
	print "($schema_name) schema will be upgraded from ($schemaRev) to ($end_ver).\n";
}


#######################
# Loop through the versions to be exported and imported.
for ( $ver = $begin_ver; $ver <= $end_ver; $ver++ ) {


	$temp_file = "${schema_name}_$ver.txt";
	if ( -f "$temp_file" ) {
		system("del $temp_file");
	}


	#######################
	# Export the version.
	$command = "cqload exportintegration -dbset $src_dbset $src_login $src_passwd $schema_name $ver $ver \"\" $temp_file";
	eval {
		print "\n$command\n";
		if ( ! $preview ) {
			$output = `$command`;
		}
	};
	print "$output\n";
	if ( $@ || "$output" =~ /ERROR: exportintegration FAILED/ ) {
		exit 1;
	}


	#######################
	# Import the version.	
	$command = "cqload importintegration -dbset $dest_dbset $dest_login $dest_passwd $schema_name \"\" \"\" $ver $temp_file \"\"";
	eval {
		print "\n$command\n";
		if ( ! $preview ) {
			$output = `$command`;
		}
	};
	print "$output\n";
	if ( $@ || "$output" =~ /ERROR: importintegration FAILED/ ) {
		exit 1;
	}


	if ( ! $preview ) {
		system("del $temp_file");
	}

}


FINISH:
exit 0;
	
Table of Contents



Determine who checked out a schema.
Updated: 10/18/06
In the CQ Designer, go to File->Open Schema. Locate the schema in the resulting pop-up box. The fourth column will tell you which CQ user has the schema checked out. If the CQ user is a generic one such as "admin", highlight that entire row and copy and paste it into a file. There is a hidden column at the end that will tell you which Windows login checked the schema out as admin.

Table of Contents





Check in a schema checked out by a different user.
Updated: 01/30/09
If a person has a schema checked out and is unavailable to check it back in, anyone else attempting to check it out will receive an error message.
To get the schema checked back in or to undo the checkout, execute the following SQL statement in the schema repository database. This will change the checkout owner to login.
	update master_schemas set checked_out_login = 'login' where name = 'schema-name'
Alternatively, you could log in to the User Administration tool as admin, change that user's password, and then log into the schema repo as that other user to complete or undo the checkout.

Table of Contents





Delete a UCM-enabled record type.
Updated: 12/15/08
CQ version: 7.0.1

WARNING! This ONLY applies to record types that have NOT been used with the CC integration. If it has and you complete the removal of the record type, many artifacts will be left hanging/orphaned.

There is a bug in CQ that prevents the clean removal of a record type after it has been enabled with the UCM package. You will be able to remove the record type, but the validation of the schema will fail with an error message that there is a reference to an EntityObj that cannot be found.
Perform the following steps to remove the record type.
1) Check out the schema.
2) Delete the record type by right-clicking on it.
3) Validate the schema.
4) Make a careful note of the Entity id mentioned in the error message.
5) Close the Designer.
6) Run the following SQL commands:
	> select schema_name, entitydef_id, package_dbid from packagerev_usage where schema_name = 'schema-name' and entitydef_id = entitydefid;
	> delete from packagerev_usage where schema_name = 'schema-name' and entitydef_id = entitydefid;
	> quit;
7) Open up the schema in the Designer again.
8) Check in the schema.

Table of Contents





Start an admin session in the schema repository.
Updated: 08/11/11
CQ version: 7.0.1.12

Admin sessions are used to manipulate users and groups and learn information about the database, as well as other schema repository functions. Note that admin sessions log into the schema repository, as opposed to regular sessions which log into a user database. You don't have to be an "admin" or super user to start an admin session, but admin session methods are locked down to specific user privileges.
	$adminSession = CQAdminSession::Build;
	$adminSession->Logon($login,$passwd,$dbset);
	CQAdminSession::Unbuild($adminSession);
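For example, once logged on you can walk the collections the admin session exposes. The following is a minimal sketch, assuming the login has sufficient privileges, that lists every login known to the schema repository:

	use CQPerlExt;
	$adminSession = CQAdminSession::Build;
	$adminSession->Logon($login,$passwd,$dbset);
	$usersObj = $adminSession->GetUsers();
	for ( $x = 0; $x < $usersObj->Count(); $x++ ) {
		print $usersObj->Item($x)->GetName() . "\n";
	}
	CQAdminSession::Unbuild($adminSession);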

Table of Contents





Rename a record script.
Updated: 05/02/11
In the Designer, simply right-click on the record script and select Rename.
Be sure to change any action Record Script hooks or buttons that refer to it.

Table of Contents





Ensure traceability of schema versions across schema repositories.
Updated: 08/25/11
Unfortunately, there's no programmatic way to determine which versions of a given schema came from the importintegration of schema versions from another schema repository.
However, it's an extremely good idea to place comments on schema version imports indicating what versions of the source schema are being imported. For example, if you export Development schema versions 98-100 and import them into the Test environment, during importintegration use a comment like "Dev versions 98-100". If those versions create, say, version 80 in the Test environment, when that schema version is migrated to Production, use a comment such as "Test version 80". In the Development environment, be sure to write change ticket numbers into the comments upon checkin of the schema. In that way you'll have traceability from a Production schema version back to ticket numbers that were worked on in Development. It's extremely important to get into this habit.

Table of Contents





Programmatically get a list of schema repositories.
Updated: 04/16/12
Version: 7.0.1.8

use Win32::Registry;

$key = "SOFTWARE\\Rational Software\\ClearQuest";
$::HKEY_LOCAL_MACHINE->Open($key,$parameters_o);
if ( "$parameters_o" eq "" ) {
	print "ERROR: Unable to open \"HKEY_LOCAL_MACHINE\\$key\" using Win32::Registry.\n";
	exit 1;
}
$parameters_o->QueryValueEx("CurrentVersion",$type,$current_cq_version);
if ( "$current_cq_version" eq "" ) {
	print "ERROR: Unable to determine the current ClearQuest version.\n";
	exit 1;
}
$parameters_o->Close;
print "Current CQ version: $current_cq_version\n";

$key = "SOFTWARE\\Rational Software\\ClearQuest\\$current_cq_version\\Core\\Databases";
$::HKEY_LOCAL_MACHINE->Open($key,$keys_o);
if ( "$keys_o" eq "" ) {
	print "ERROR: Unable to open \"HKEY_LOCAL_MACHINE\\$key\" using Win32::Registry.\n";
	exit  1;
}
$keys_o->GetKeys(\@repo_keys);
if ( ! scalar(@repo_keys) ) {
	print "ERROR: Unable to get the list of CQ repos.\n";
	exit 1;
}
$keys_o->Close;
$cq_repos = join(",",@repo_keys);
print "CQ schema repos: $cq_repos\n";
Table of Contents



Restart a schema at version 1.
Updated: 08/07/18
Version: 9.0.1.3

The number of versions in a schema can grow to a large number over many years of development. If you need to copy that schema into a different schema repository or use it in a new database in the same schema repository, consider resetting the version number back to 1 first. You'll lose the ability to research older "versions", but all the current functionality will be there. Moreover, exports and imports of a schema go much faster the fewer versions there are.
1) In the schema repository that has the existing schema, select File -> New Schema.
2) Choose the latest version of the schema to be copied.
3) When prompted, don't create a new database or check out the schema. You'll now have a duplicate schema whose version is reset (compressed) back to version 1. You can create a database for it there or use "installutil exportschema" to copy it to another schema repo.

Table of Contents





Hide records/types from users/groups.
Updated: 08/04/10
New in CQ2001, one has the ability to hide specific records from users or groups of users. This is very useful to control record access on a company, department, or project level.
1) In the Designer, create CQ groups that align with the desired security model.
2) Open the schema and create a stateless record type that will be used to define the security context. That is, designate a record type that contains the group or user security information. For example, to restrict records based on company, create a stateless record type called "Company" whose fields are information about that company. In most bundled schemas, the "Customer" record type already exists for this purpose. This is known as the security context record type.
3) In the record type whose records are to be controlled, create a field of type REFERENCE (not REFERENCE_LIST) that points to the security context record type. While in the field's definition dialog, be sure to check the "Security Context" box. The Security Context causes a new tab called "Ratl_Security" to be added to your security context record type (the one created in step 2). That tab name can be changed without issue. You cannot reference any system record type, such as history, users, groups, or attachments, because you don't have permission to add the "Ratl_Security" tab to those record types. More than one security context field can be added to the form. You'll notice that multiple groups can be added to the "Ratl_Security" tab in a security context record. If a user wanting to view a particular record belongs to any group in that list, he/she gets to see the record.
4) Add the field to the "Defect_Base" form; it probably isn't needed on the "Defect_Base_Submit" form.
5) One problem with this scenario is how to populate the security field with a group value. The field could be left blank until a manager views the record and assigns it a value, but that would leave the ticket open to all customers until the field got populated with a specific customer's name. Another way is to have the field automatically populated with the user's group when the record is submitted. The downside is that if a user belongs to many groups, you have to decide which group to use. One approach is to have customers belong to only one group with a specified naming format. For example, all employees of CompanyX would belong to only one group called "customer_companyx". They can belong to many groups, but any given user would only belong to one group that begins with "customer_". Then, a hook on the "Default Value" column of the security field could populate the field with the name of that one "customer_" group to which they belong (see the sketch after this list). As a less elegant alternative, a form is allowed to have several security fields, each of which could be populated with a different group to which the user belongs.
6) If the security field is set to be populated automatically, the Behaviors for the security context field(s) should be set as READONLY for all states.
7) Create/submit the actual security context record(s) from the Client.
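Regarding step 5, the following is a minimal sketch (Perl) of what the security field's Default Value hook body might look like. It assumes customer groups follow a "customer_<company>" naming convention and that each security context record's display name matches the group name; both are placeholders for your own conventions.

	# Minimal Default Value hook body sketch; $fieldname is the hook's field parameter.
	# The "customer_" prefix and the record-naming assumption are placeholders.
	my $session    = $entity->GetSession;
	my $userGroups = $session->GetUserGroups;
	if ( @$userGroups ) {
		foreach my $group ( @$userGroups ) {
			if ( $group =~ /^customer_/ ) {
				# Point the REFERENCE field at the matching security context record.
				$entity->SetFieldValue($fieldname, $group);
				last;
			}
		}
	}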

NOTE: The security context does not apply to any user designated as "Super User" or "Security Administrator".
As of 7.1 there is a built-in user group called "Everyone" that can be used as a default CQ group until the security context record is backfilled with the desired groups.
In addition to the actual records being locked down, the security context record itself (the record with the Ratl_Security tab) cannot be seen by users that are not members of the designated CQ group. So, if you have other users taking care of those records, make sure they get designated as Security Administrator in the User Administration Tool.
Changes in security context are not picked up dynamically by logged in users. A user must log out and then back in to pick up the changes.
Even if a user is not in the security context group, they can still create records of that type ... subject to other action permissions that might be in place.

Table of Contents





Make a field read-only.

In the CQ Designer, in Record Types -> Defect -> States and Actions -> Behaviors right-click in the state columns next to the field and select READONLY.

Table of Contents





Restrict tab read access.

Anyone with a CQ login can read any tab on any record unless that tab has been restricted to certain group(s); the tab is then invisible to everyone outside the selected groups. There is no way to "hide" an individual field.
With the form open in the Designer for editing, right-click on the tab and open its Tab Properties sheet. Simply deselect the All Users check box and Add or Remove groups from the Selected Groups pane. Be sure to change the group permissions on all forms, such as Defect_Base_Submit and Defect_Base.
To add a new access group, see "Create a CQ group".

Table of Contents





Restrict write access to CQ objects.

CQ limits write access on several levels. To restrict write access to a schema, give only selected users the Schema Designer permission level. Actions can be restricted based on groups or hooks that in turn look at users and/or groups. Write access to a specific field is controlled with hooks.

Table of Contents





Set up user password authentication.
Out of the box, CQ has no password authentication. The authentication box comes up, but no password is required. In the CQ Designer -> Tools -> User Administration... one can manage all aspects of user information and passwords. Don't forget to set the appropriate database Subscription in the User Information page (double-click on a user) and to Upgrade user DB... (main page) if any modifications have been made. A subscription summary of a given database can be seen via the DB Subscriptions... button (main page).

Table of Contents





Restrict web access.
Updated: 06/29/11
It's possible to allow a general group of users into your system, yet severely limit their access to records (both read and edit). This is useful if you have users that need to submit bugs and enhancement requests, but are not part of applications or development teams that use the system for change management.
Setting up restricted users is as simple as editing the Site Configuration (old web) or Site Administration (new web) in the web interface, designating users by login or by CQ group name, and then assigning a query that they can run. However, depending on your environment, schema modifications may be necessary.

A restricted user does not consume a license.
You usually only allow a restricted user to run a single query created by CQ admins, but you can also allow that user to have a Personal Queries folder (see below).
You usually only allow a restricted user to submit records, but you can also allow that user to modify records (see below).
Restricted user mode is limited to the web interface.

The following are configurable regarding restricted mode.
Allow Find Record: This allows the restricted user to utilize the Search utility. Note that if checked, the restricted user can find records that would not have been found by the restricted query.
Allow Modify Record: This allows the restricted user to edit any record they can find, subject to other permissions.
Allow Workspace: This gives the restricted user a Personal Queries folder, which would be editable to create custom queries. This disables the single “restricted query” defined by CQ admins.
Allow Modify User Profile: This gives the restricted user the ability to change things like their fullname, email, and phone.

Even though restricted users have extra limitations imposed on them simply by designating them as such in the web interface administration area, schema modifications may be necessary as well. For example, if you want restricted users to submit only a certain type of record, ensure that the restricted user group does not have permissions on the other record types. As another example, if submitting a record presents a choice list of applications based on membership in those applications, but you want the generic restricted users to be able to submit tickets against any application, you'll need to modify that choice list hook (see the sketch below).
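The following is a minimal sketch (Perl) of that kind of choice list hook. The "Application" record type, its "name" and "members" fields, the "restricted_web" group, and the hook name are all placeholders for whatever exists in your schema.

sub defect_application_ChoiceList {
	# Hypothetical names throughout; adjust the record type, fields, and group.
	my($fieldname) = @_;
	my @choices;
	my $session    = $entity->GetSession;
	my $userGroups = $session->GetUserGroups;

	my $queryDefObj = $session->BuildQuery("Application");
	$queryDefObj->BuildField("name");

	# Regular users only see the applications they belong to; members of the
	# generic restricted group see every application.
	if ( ! grep(/^restricted_web$/, @$userGroups) ) {
		my $filterOp = $queryDefObj->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
		$filterOp->BuildFilter("members.login_name", $CQPerlExt::CQ_COMP_OP_EQ,
		                       [$session->GetUserLoginName]);
	}

	my $resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute;
	while ( $resultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS ) {
		push(@choices, $resultSetObj->GetColumnValue(1));
	}
	return @choices;
}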

Table of Contents





Administer dynamic choice lists.
New in CQ2001, one has the ability to designate dynamic choice list administrators, who can add, delete, and/or modify entries. The dynamic list definition must still be made inside the schema by a schema designer. Users are given the "Dynamic List Administrator" privilege via User Administration in the Designer. Once a dynamic choice list is set up, only users with that privilege will be able to modify dynamic lists.

Table of Contents





Administer public queries folder.
New in CQ2001, users can be designated such that they have the rights to add queries to the Public Queries folder. Formerly, only users with full Super User privileges had that ability. Users are given the "Public Folder Administrator" privilege via the ClearQuest User Administration tool.

Table of Contents





Administer security at a site.
New in CQ2001, the Security Administration privilege gives a designated member of a group the ability to see all records in a database, where other members of the same group cannot. This user cannot add new users, modify a schema, give themselves Super User privileges, nor modify another user's privileges. However, he/she can modify the "All Users/Groups Visible" privilege for other users. The "Security Administrator" privilege is set for a user via the ClearQuest User Administration tool.
This type of user can see all records when "security context" has been implemented. See Restrict access to certain records.

Table of Contents





Restrict the list of users/groups seen by users.
New in CQ2001, users can be restricted to seeing only their own logon where normally they would have seen a complete list of users/groups subscribed to that database. In the ClearQuest User Administration tool, toggle the "All Users/Groups Visible" privilege.

Table of Contents





Re/set CC/CQ integration password and database.
Version: 7.1.2
Updated: 06/20/12
The first time you use the CC/CQ integration, CQ will prompt for a password and CQ database. All subsequent uses of that integration will use that password and database without prompting. It's possible to reset the information, if the CQ user database or dbset connection has changed.
Note that this command didn't show up until version 7.0. On Unix, it usually lives in /usr/atria/bin.

From the CLI:
	crmregister add -database user-database -connection dbset -user cq-username -password cq-password
	crmregister list
On Windows you can also run regedit and manually remove "HKEY_CURRENT_USER\Software\Rational Software\ClearCase\UCMCQ_Integration\database". In 2003.06 and earlier, that registry key was "HKEY_CURRENT_USER\Software\Rational Software\ClearQuest\2003.06.00\Common\CQIntSvr".

Table of Contents





Update user/group information in a user db from an external script.
It doesn't appear that this is possible. Even though user information can be updated in the master db from an external API call, the user dbs can only be upgraded from within the Designer.

Table of Contents





Restrict access to certain records.
ClearQuest security features work by restricting user access to records based on membership to user groups. Record hiding is accomplished by placing a security context field in the record type of the records to which you want to restrict access. The security context field references a security context record containing data that determines which users can see the record. For example, in order to control which customers are allowed to see defects, you might place a field called "customer_defects" in the Defects record type and reference this field to the "Customer" record type. You would then assign user groups to each customer record, which grants these groups privileges to see defect records that refer to the customer record. Only users who are in the group list of the security context record will be able to see the controlled record(s).
The security context field must be of type REFERENCE and not REFERENCE_LIST. A particular secured record can only reference the security information of a single stateless record and not a list of records. If securing tickets based on customer access, most out-of-the-box schemas come with a stateless record type called Customer. Unfortunately, the field called "customer" in the Defect record type is of type REFERENCE_LIST. You'll need to abandon (or delete) that field and create a REFERENCE field that references the Customer stateless record type. When you create the new REFERENCE field, be sure to select the "Security Context" box in its Properties sheet.
When a field is connected to a record type for security context, the security record type will have a new tab added to it called Ratl_Security. On that tab is where you specify what groups get access.

Table of Contents





Change user information from the client.
As of CQ 2.0, users have the ability to change their own information from within the Client. However, users cannot change their login names or group affiliations; those must still be done from within the Designer.
In the Client, go to View -> Change User Profile.

Table of Contents





Create new users and groups.
Log into the CQ Designer as somebody with User Administrator or Super User privileges. Open Tools->User Administration. Once a user/group has been created, you'll need to push that user out to the database(s) before that logon will be available.
As of 2003.06.00 you start the User Administration GUI from the Windows Start menu.

Table of Contents





Log into the web without manually typing in a username and password.
Updated: 08/15/11
There is no way to allow a user to log in without supplying a username and password. However, you can pre-supply the username and password for them. The need for this usually comes up when you allow absolutely anyone to submit a ticket, such as to a help desk. However, this solution only works from the web interface.
A user normally types in a URL such as "http://machine/cqweb". So that they don't have to remember (or even care about) a username and password, create a link to your CQ db using something like the following:

7.0 and later:
http://web_server/cqweb/main?command=GenerateMainFrame&service=CQ&username=login&password=password&schema=schema-repo&contextid=user-db
Note: Don't enclose the password in quotes even if the password is null, or it will fail to log in.

2003.06.13:
http://web_server/cqweb/main?USE_CASE=GO&service=CQ&schema=schema-repo&contextid=user-db&username=login&password="password"

2003.06.12 and earlier:
http://web_server/cqweb/logon/default.asp?DbSetName=schema-repo&DatabaseName=user-db&user=login&password="password"

For a custom restricted user, add the restricted user to a special CQ group. Set the Public Queries folder permissions to No Access for that group. Create a single or set of queries in the restricted user's Personal Queries folder, but then set the Personal Queries folder permission to allow read-only permission to that group.

Table of Contents





Subscribe all users to a new db.
If you add a new user, you can pick and choose the databases to which they will be subscribed. If you add a new database, it's possible to subscribe all existing users to that db without resetting their original database subscriptions.
In User Administration, click on DB Action->Subscribe. Hold down the shift key and select all users in the upper-left pane. In the bottom pane, select just the new db and click OK. Back in the original dialog, click DB Action->Upgrade.

Table of Contents





Set up electronic signatures (eSignature).
Updated: 06/15/06
As of 2003.06.13, CQ has a package called "eSignature" that can be used to "sign" a record. The package has the following features:

Table of Contents





Set up a user as strictly readonly.
Updated: 08/15/06
Unfortunately, there is no property on a user record that allows a user to be Active, but at the same time only have readonly access.
The web interface has the ability to set up a user as a restricted user. But, that means that user cannot create queries and can only use the one query defined by an administrator.
If you want to give a user the ability to log into CQ and create and run any query, chart, or report, but not be able to submit or modify any record, create a CQ user group in the User Administration Tool called, perhaps, "READONLY". Add the specified user(s) to that group. Then add the following code to a global script called "IsREADONLY".
Function IsREADONLY

	' ''''''''''''''''''''''''''
	' Users in the "READONLY" CQ group can neither submit nor modify any record.
	userGroups = GetSession.GetUserGroups
	IsREADONLY = FALSE
	if IsArray(usergroups) then
		for each group in userGroups
			if group = "READONLY" then
				IsREADONLY = TRUE
				exit function
			end if
		next
	end if

End Function
Then, call that global script from the Access Control hook of every action that you want to prevent those users from executing. In this example, it is the "Defect" record type.
	' ''''''''''''''''''''''''''''''''''
	' Users in the "READONLY" CQ group can neither submit nor modify any record.
	if IsREADONLY = FALSE then
		Defect_AccessControl = TRUE
	else
		Defect_AccessControl = FALSE
	end if
Table of Contents



Add/remove users to/from groups.
Updated: 09/12/06
Log into the User Administration Tool.
For a single user, double-click on the user. In the box called "Groups", simply check the box next to the group to which the user is to belong. Click OK.
For multiple users, on the main User Administration page, right-click on the desired group in the Groups box and choose Edit Group. In the resulting pop-up, multi-select groups and/or users and click the Add button.
When users have been added to groups, remember to push the changes to the user databases by selecting Upgrade under DB Action menu in the main window. Note that when changes are made to a group's membership, users that are currently logged in will not see the change until they next log in.
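The same task can be scripted through an admin session. The following is a minimal sketch with placeholder login, group, and user names; AddUser is shown elsewhere on this page, and a corresponding RemoveUser call is assumed to exist for removals.

	use CQPerlExt;
	$adminSession = CQAdminSession::Build;
	$adminSession->Logon("admin","passwd","dbset-name");

	$groupObj = $adminSession->GetGroup("Test_Managers");
	$userObj  = $adminSession->GetUser("eostrand");
	$groupObj->AddUser($userObj);	# RemoveUser($userObj) is the assumed counterpart.

	# Push the change out to each user database the user is subscribed to.
	$dblist = $userObj->GetSubscribedDatabases();
	for ( $x = 0; $x < $dblist->Count; $x++ ) {
		$db    = $dblist->Item($x);
		$dbObj = $adminSession->GetDatabase($db);
		$dbObj->UpgradeMasterUserInfo();
	}

	CQAdminSession::Unbuild($adminSession);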

Table of Contents





Restrict queries, charts, and reports to groups.
Updated: 08/15/11
Version: 7.0.1.12
You need the Public Folder Administrator or Security Administrator privilege to change the permissions on public queries. Query permissions are applied in the user database. The permissions are set on the parent folder. So, if you want to secure a query or report, place it inside a folder and secure the folder, which then also prevents other users from even seeing the query, subject to the permission level applied.
Simply right-click on the folder and select Permissions. For more information, see the admin manual section "Workspace folder permissions". The manual only refers to public folders, but it is applicable to your own Personal Queries folder as well, which may be useful when setting up restricted user access such that the restricted user cannot change the contents of their own Personal Queries folder. These permissions don't apply to a user with the Security Administrator or Public Folder Administrator privilege.
To lock a folder down to a single group, highlight all the groups in the right-hand pane, then right-click on the set and choose Permission -> permission level.
If a group name changes, the folder permissions automatically pick up on the new group name.
Also, independent of the folder permissions, you can restrict who is allowed to change the permissions themselves. This setting is automatically applied to all subfolders and is not changeable at the subfolder level.

Permission precedence: from highest to lowest. Precedence is used to determine effective permissions when there are multiple groups with different permissions acting on the same folder. For example, if you are a member of group A and group B, and group A has read-write permissions and group B has read-limited permissions, you will be granted read-limited permissions. It seems to me that read-write should have a higher precedence than read-limited, but that's the way it is.
Read-limited: Users can see the contents of the current folder, but not subfolders, unless they have explicit permission to those folders as well.
Read-write: Users have full control of the folder.
Read-only: Users can run the queries, but not modify them or create subfolders.
No-Access: Users cannot see the contents of the folder.

Note that as with other query changes, the changes are only picked up when a user next logs into the user database.
Note that without implementing security context fields, users prevented from seeing the locked down query folder can still simply create a query of their own. That is, this isn't a method of preventing users from seeing another team's records.

Table of Contents





Create a new state.
These steps are for the older, VB-style designer.
1) Open the schema to be modified.
2) Record Types -> Defect -> States and Actions -> State Transition Matrix.
3) Edit -> Add State -> Name.
4) Open the Actions matrix. Create a new state action if necessary, or simply use existing actions to connect your new state to existing states. Each state needs at least two CHANGE_STATE actions associated with it.
5) File -> Test Work. When testing is complete, check in the modified schema and upgrade the real database if desired.

Table of Contents





Create a new state action.
Actions are used when any modification is made to an existing record.
1) Open the schema to be modified.
2) Record Types -> Defect -> States and Actions -> Actions.
3) Edit -> Add Action... -> General tab -> Action Name.
4) State tab: add source and destination states for this action.
5) Test the modification.

Table of Contents





Set a default action for a state.
With the schema open for edit in the Designer, enter the State Transition Matrix. Enter the Properties sheet of the state and go to the Default Action tab. The default action will be in bold and listed first among the actions.

Table of Contents





Set the order in which the actions are listed.
Beyond setting a default action for a given state, there is no way to control the order of the rest of the actions.

Table of Contents





Disallow an action for certain states.
If you don't want an action to be available from a certain state, ensure it doesn't show up as an action in the State Transition Matrix in the CQ Designer.
However, if the action is a generic one, such as Modify, it won't show up in the State Transition Matrix. In that case, put code like the following in the action's Access Control hook. This example is from the Modify action's Access Control hook.
	# Tickets cannot be modified when Closed or Duplicate.
	$state = $entity->LookupStateName();
	if ( "$state" =~ /^Closed$|^Duplicate$/ ) {
		$result = 0;
	} else {
		$result = 1;
	}
Table of Contents



Remove a state.
States can be removed as well as added. However, when removing a state, be sure to address any actions that depend on that state, fields specifically associated with that state, queries that include that state, documentation that may contain the state flow diagram, etc...
A project may want to drop the state called Duplicate, as they may consider that to be a form of Closed. However, the Duplicate and Unduplicate actions are tied to the state called Duplicate and don't work if the state is removed. It would take quite a bit of work to reproduce that functionality in a different state. I've done it before, but it adds too much hook code. So, if you want the Duplicate functionality, but are thinking of not having it as its own state, I would advise against it.
To remove a state, simply right-click on it in the Designer and choose Delete.

Table of Contents





Nested actions.
Updated: 12/19/08
CQ version: 7.0.1

A nested action is any action started when an action is already in progress. Nested actions can be started only when a hook calls the BuildEntity or EditEntity methods of the Session object. Some actions can be both a primary action (initiated directly by the user) and a nested action (initiated by a hook).
Note: Nested actions trigger all base actions for that record type, just as primary actions do.
Nested actions differ from primary actions in that action access control hooks and notification hooks are not executed for nested actions. The Action Access Control hook is not run if a hook starts a nested action. Because all hooks execute with the SuperUser privilege, the privilege level is already at its highest (SuperUser). There is no need to run the access control hook for the nested action. Access for a nested action is also granted when no access control hook is fired. Notification hooks do not execute for a nested action, by default. Notification hooks are used to send an e-mail. Having each nested action send an e-mail would result in many e-mails sent for what the user considers to be one action. You can override this behavior and allow nested actions to execute notification hooks by setting the CQHookExecute session variable to a value of 1.
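For example, a hook might edit a related record with a nested action along these lines. This is a minimal sketch; the "Defect" record type, the record ID, the "Modify" action, and the "Description" field are placeholders.

	# Start a nested Modify action on another record from inside a hook.
	my $session  = $entity->GetSession;
	my $childObj = $session->GetEntity("Defect", "SAMPL00000042");	# hypothetical ID
	$session->EditEntity($childObj, "Modify");
	$childObj->SetFieldValue("Description", "Updated by a nested action.");
	my $status = $childObj->Validate;
	if ( "$status" eq "" ) {
		$childObj->Commit;
	} else {
		$childObj->Revert;
	}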

Table of Contents





BASE actions.
Updated: 12/19/08
CQ version: 7.0.1

A BASE action is a secondary action that is triggered by a primary or top-level action. A BASE action hook is automatically triggered by every other action (such as Nested actions, initialization, access control, validation, and commit) for that record type. BASE actions allow an action hook to be written once and then re-used with multiple actions. For example, writing a BASE action and adding a notification hook to send an e-mail causes an e-mail to be sent when any action is performed on the record. Each step of an action (initialization, access control, validation, commit, and notification) executes the hooks of all BASE actions for that record type, followed by the hook for the main action itself. A BASE action cannot be initiated directly by a user, so it is not displayed in the list of possible actions presented to the user in the Actions menu. There can be multiple BASE actions for a record type. Some BASE actions can be added to a schema by the application of a package.
Note: Multiple BASE actions do not run in a specific order, but always precede the main action that triggered them.
Note: Any access control restrictions placed in base actions apply to all other actions.

Table of Contents





View the State Transition Matrix or State Section.
Updated: 07/02/12
CQ version: 7.1

In the new Eclipse-based designer, expand schema-name, expand version, expand Record Types, expand record type, expand States and Actions, expand States, and double-click on any state.
In the older VB-based designer, expand Record Types, expand record type, expand States and Actions, and double-click on State Transition Matrix.

Table of Contents





Users & groups.
WARNING: Only edit user information when users are not likely to be using CQ. If you have the Designer open and are editing user information and a user changes his/her password via the client or web, the user database will be updated, but the master database will not. When you finally go to push the new user information to the user database, you will get an error stating that the two databases' user information is out of sync. You have no choice but to abandon your user information work, close the Designer, and log back in to resynchronize the two.

NOTE: The CQ User Administration tool does not push inactive users to a database when doing an "Upgrade".

CQ has four types of users:
1) Active User
- Has CQ Web and Client logon privileges.
- Can change own password, name, email and/or phone.
- Cannot change group or subscription info even for self.
- Can view schemas, schema info and database info.
2) Schema Designer
- In CQ, can modify contents of the Public Queries folder.
- Can change schemas and upgrade databases.
- Cannot create or delete databases.
- Cannot edit user information other than what any Active User can do.
3) User Administrator
- Can edit all user, groups and subscription information.
- Can only grant/revoke permissions that s/he has.
4) Super User
- Has all CQ permissions, including both Schema Designer and User Administrator permissions.
- Can create and delete schemas and user databases.

Table of Contents





Create a CQ group.
Updated: 01/03/12
Version: 7.0.1.8
In the CQ Designer Tools -> User Administration... -> Edit Group... -> New Group.
To add users to the new group, ensure your new group is selected in the User Group pane, select a user in the Users pane and click on Add. To create new users, see "Create a CQ user".
Once the group is complete, associate it with database(s) via the Group Subscription button at the bottom. The default is all databases.
Once satisfied with all the changes, click the "Upgrade user DB..." button.
Note that group names can only contain numbers, letters, and underscores, and must be between 1 and 30 characters.

Table of Contents





Create a CQ user.
In the CQ Designer Tools -> User Administration... -> Add... and fill in all relevant information. See users & groups for a description of the Permission levels. Even though the Permission check boxes are independent, all users need at least Active User regardless of what other boxes are selected.
By default, the user is automatically given access to all databases. To limit database access, select OK to leave the User Information screen. Highlight the new user and click the Subscription button to see a list of databases for that user.
To add the user to specific groups, go to Edit Group. To simply view current subscriptions, click the DB Subscriptions button at the bottom.
Once satisfied with all the changes, click the "Upgrade user DB..." button.

Table of Contents





Import user information.
Updated: 08/29/11
In CQ, one has the ability to import users en masse from a file. Open the CQ Designer without any schemas open for edit. Go to Tools -> User Administration and choose Utilities->Import. As of CQ 2005, the User Administration tool can be started as a stand-alone tool from the Start menu.
Simply supply the filename to import. The file can also be generated by exporting from a different database. If any of the newly imported users show up wearing red shirts with an X across their heads, it means that you need to double-click on them and give appropriate permissions. At a minimum, a user needs to have Active User permissions. As can be seen below, those permission levels can be set in the input file. Remember to upgrade the user databases when done.
Note that the person performing the import cannot have an entry in the import file; you can't edit your own login information. This restriction doesn't seem to apply to the "admin" user.
The following is sample input file format:
USER vobadm
    password        = mdabov
    is_active       = FALSE
    email           =
    fullname        = vobadm
    phone           =
    misc_info       = Exported from DDTS
    is_superuser    = FALSE
    is_appbuilder   = FALSE
    is_user_maint   = FALSE
    databases       =

USER ejo
...

GROUP Test_Managers
    is_active       = TRUE
    is_subscribed_to_all_dbs = TRUE
    members         = user1 eostrand
    subgroups       = 
    databases       = 

...
Table of Contents



Set up groups within groups.
CQ allows you to set up group hierarchies. In the CQ Designer under Tools -> User Administration, click on Edit Group. If the group you'd like to nest does not already exist, simply click on New Group. Once the groups you'd like to work with exist, simply drag and drop a copy of the group into place in a hierarchy.

Table of Contents





Add a user not mastered locally to a group.
Users not mastered locally can be added to a group that is mastered locally. Once the user is added to the group, sync with the working schema site. Once the packet is imported at the working schema site, upgrade the user databases and synchronize with all sites.

Table of Contents





Delete a user.
Updated: 05/14/07
Users cannot be deleted using the CQ User Administration tool. Under normal conditions users are not deleted, but merely deactivated. DO NOT perform these steps unless it is absolutely necessary to delete the user record.
If a user entry must be deleted from a database, a DBA can perform the following steps:
1) IMPORTANT: Ensure the user is not referenced by ANY records. You can de-reference the user by modifying the referencing records from the CQ interface. If not, you must remove the entries in the parent_child_links table and the referencing record's table.
2) In the CQ User Administration tool, ensure the user is not a member of any groups. Push the changes to the user databases, if any changes were made.
3) In ALL user databases, remove the row from the "users" table.
4) In the MASTR database, remove the row from the "master_users" table.

Table of Contents





Unsubscribe a user.
Updated: 07/12/07
Creating a user login isn't enough; a user will not be able to access a given database unless they are subscribed to that database. In the User Administration Tool, a user can be subscribed to all databases, present and future, or to specific databases.
If you don't want a user to access a database any more, you can make that user Inactive, or unsubscribe the user from the database.

NOTE: Unsubscribing a user from a database does not delete the user's login from that user database's "users" table. That is, once a user exists in a user database, it will always exist in that database unless you delete the user.

Table of Contents





Determine a user's groups.
Updated: 04/07/09
$userGroups = $sessionObj->GetUserGroups;
if ( ! @$userGroups ) {
	print "The current user does not belong to any groups.\n";
} else {
	foreach $group (@$userGroups) {
		print "$group\n";
	}
}
NOTE: This API call only returns active groups. If you want to know all the groups to which the user belongs, a query will need to be created.

Table of Contents





Determine a group's users.
Updated: 04/07/09
The following hook query returns a list of users belonging to a specified group. Because this is useful in many situations, it should be created as a subroutine in the Global Scripts. It filters out inactive group members.
NOTE: This is easier done with an admin session object creating a group object, but you don't want to create a new session within a session for performance reasons.
# Usage:
#   $group_name = "Team1";
#   @members    = GetActiveGroupMembers($group_name);
sub GetActiveGroupMembers {
	my @group        = ($_[0]);
	my @members;
	my @active_state = (1);
	my $session = $entity->GetSession;

	# Start building a query of the users.
	my $queryDefObj = $session->BuildQuery("users");

	# Have the query return the desired field for the user object(s).
	$queryDefObj->BuildField("login_name");

	# Filter for active members of $group.
	my $filterOp = $queryDefObj->BuildFilterOperator($CQPerlExt::CQ_BOOL_OP_AND);
	$filterOp->BuildFilter("groups.name",$CQPerlExt::CQ_COMP_OP_EQ,\@group);
	$filterOp->BuildFilter("is_active",  $CQPerlExt::CQ_COMP_OP_EQ,\@active_state);

	# Run the query.
	my $resultSetObj = $session->BuildResultSet($queryDefObj);
	$resultSetObj->Execute;

	# Add each login name in the result set to the members array.
	while ($resultSetObj->MoveNext == $CQPerlExt::CQ_SUCCESS) {
		push(@members,$resultSetObj->GetColumnValue(1));
	}
	return @members;
}
An alternative and far less efficient method is to start an admin session. However, due to the performance consequences of opening admin sessions inside hooks, you shouldn't use this; it's included here as an admin session example only.
my $adminSessionObj = CQPerlExt::CQAdminSession_Build();
$adminSessionObj->Logon("admin","boeing","");
my $groupsObj  = $adminSessionObj->GetGroups;
my $CMgroupObj = $groupsObj->ItemByName("CM");
my $usersObj   = $CMgroupObj->GetUsers;
my $nusers     = $usersObj->Count;
my $x;
my $userName;
my $userObj;
for ($x = 0; $x < $nusers; $x++) {
    $userObj  = $usersObj->Item($x);
    $userName = $userObj->GetName();
    push(@choices,$userName);
}
CQAdminSession::Unbuild($adminSessionObj);
return @choices;
Table of Contents



Create a new user with an admin session.
Updated: 08/09/11
Version: 7.0.1.12
CQ users can be added to the system programmatically. Note that UpgradeMasterUserInfo updates all user information for all users for the specified $dbObj. It can be very slow for a large number of users. Unfortunately, the UpdateInfo call, which only updates the $userObj in question, does not work for new users, nor does it update group information. The login must at least have the User Administration privilege.
	$adminSession = CQAdminSession::Build;
	$adminSession->Logon($login,$password,$dbset);

	$userObj = $adminSession->CreateUser($lan_id);
	$userObj->SetLDAPAuthentication($lan_id);
	$userObj->SetActive("1");
	$userObj->SetSubscribedToAllDatabases("1");
	$userObj->SetFullName("$fullname");
	$userObj->SetPhone("$phone");
	$userObj->SetEmail("$email");
	$userObj->SetMiscInfo("$misc_info");

	$groupObj = $adminSession->GetGroup($group);
	$groupObj->AddUser($userObj);

	$dblist = $userObj->GetSubscribedDatabases();
	for ( $x = 0; $x < $dblist->Count; $x++ ) {
		$db	= $dblist->Item($x);
		$dbObj	= $adminSession->GetDatabase($db);
		$dbObj->UpgradeMasterUserInfo();
	}

	CQAdminSession::Unbuild($adminSession);
Table of Contents



Copy missing users from another database.
Updated: 08/30/11
Version: 7.0.1.12
To backfill a user database with user information from another database, you only need to go into the ClearQuest User Administration tool for the source schema repository and select Utilities -> Export. Then, in the ClearQuest User Administration tool for destination schema repo, select Utilities -> Import.
When doing it this way, it will update the user information, including passwords and privileges, for all users and groups. This may not be desired, as there may be privilege, database subscription, and/or password differences that are specifically different between, say, a production and test environment.
However, there are times when you want to backfill missing users into the test environment, because the production environment is where user information has presumably been kept up to date. Unfortunately, the ClearQuest User Administration tool doesn't provide the ability to import just the missing users. To do that, you'll need to parse the userinfo.txt file that was output from the Export activity. The following script will perform that parsing. This isn't a standalone script, but does show the critical parts.
######################
# Retrieve the contents of the userinfo file.
open(INFILE,"$infile") || die "$script: Unable to open \"$infile\" for read.\n";
@indata = <INFILE>;
close(INFILE);

open(OUTFILE,"> missing_userinfo.txt") || die "$script: Unable to open \"missing_userinfo.txt\" for write.\n";


######################
print "Logging into th $dbset schema repo ...\n";
eval {
	$adminSession = CQAdminSession::Build;
	$adminSession->Logon($login,$passwd,$dbset);
};
if ( $@ ) {
	print STDERR "\nUnable to log into $dbset.\n$@\n";
	exit 1;
}


######################
# Build a list of users and groups in the destination schema repo.
$usersObj	= $adminSession->GetUsers();
$n_users	= $usersObj->Count();
for ( $u = 0; $u < $n_users; $u++ ) {
	$userObj = $usersObj->Item($u);
	push(@existing_logins,$userObj->GetName());
}

$groupsObj	= $adminSession->GetGroups();
$n_groups	= $groupsObj->Count();
for ( $u = 0; $u < $n_groups; $u++ ) {
	$groupObj = $groupsObj->Item($u);
	push(@existing_groups,$groupObj->GetName());
}


######################
# Loop through the input contents and generate the output file.
foreach $row (@indata) {

	chomp($row);

	# Skip commented lines.
	if ( "$row" =~ /^#/ ) {
		next;
	}

	if ( "$row" =~ /^USER / ) {
		($login = $row) =~ s/USER //;
		if ( grep(/^$login$/,@existing_logins) ) {
			$printing = 0;
		} else {
			$printing = 1;
			print OUTFILE "\n$row\n";
		}
		next;
	}

	if ( "$row" =~ /^GROUP / ) {
		($group = $row) =~ s/GROUP //;
		$create_it = 1;
		foreach $existing_group (@existing_groups) {
			if ( "$group" eq "$existing_group" ) {
				$create_it = 0;
				last;
			}
		}
		if ( $create_it ) {
			$printing = 1;
			print OUTFILE "\n$row\n";
		} else {
			$printing = 0;
		}
		next;
	}

	if ( $printing ) {
		print OUTFILE "$row\n";
	}
}
Table of Contents



Change a user's login_name.
Updated: 09/15/11
Version: 7.0.1.12
To change a user's login name, log into the ClearQuest User Administration tool as a user administrator other than yourself. You can't change your own login.
Simply type in the new name and push the change to the user database(s).
If the user is not authenticating against LDAP, you'll have to reset the password.
Note that if the change is simply a case-sensitivity change and you're using a backend database like DB2, the change won't work. That is, if you merely change a letter from lower case to upper case, after you click OK, go back and look at the change and you'll see that it hasn't changed. The solution is to change the login to some bogus string, click OK, go back to the user and type in the correct new login.
If login_names are utilized in the schema in fields that are not reference type fields, you'll have to run a query to find where that user has been inserted into a record and perform a backfill of the new value. If there are many, many records, the easiest way to do it is to generate a file appropriate for use with the ClearQuest Import Tool and use that for the bulk backfill.

Table of Contents





Determine a user record's mastership with the API.
Updated: 10/02/12
Version: 7.1.2
The API manual lists several calls to get user information once you have a user object. However, it doesn't show an API call to get the mastership.
The mastership check is actually part of the entity set of API calls: SiteHasMastership. The call will return a 1 if mastered locally or a 0 if not. If you have the user object:
	if ( $user_o->SiteHasMastership() ) {
		...
Table of Contents



Determine a user's authentication mode.
Updated: 04/12/13
Version: 7.1.2
Log into the ClearQuest User Administration tool. Any user with an active login for that database can log in, but cannot make any changes without one of the proper privileges. Once logged in, double-click on the desired user to bring up the User Properties. A check box in the lower left will tell you if the user is LDAP authenticated or not.

To get information programmatically using the API:
	$user_o = $adminSession_o->GetUser("login_name");
	$mode	= $user_o->GetAuthenticationMode();
	if ( $mode == $CQPerlExt::CQ_LDAP_AUTHENTICATION ) {
		print "LDAP Authenticated";
	}
	if ( $mode == $CQPerlExt::CQ_CQ_AUTHENTICATION ) {
		print "CQ Authenticated";
	}
Unfortunately, as you can see, to get the information you need to start an admin session. That is, you can't programmatically determine a user's authentication mode from a regular user database session.

A third option is to access the ClearQuest User Administration tool and export the user information. On the right-hand side, select Utilities -> Export. Read the resulting export file with a normal text editor, such as Wordpad (Notepad doesn't handle the line terminations properly). Each USER entry will have an entry called "authentication". Unfortunately, there's no way to programmatically perform that export.

Table of Contents





Push user and group changes to the user databases.
Updated: 08/07/18
Version: 9.0.1.3
In the User Administration tool, once you've updated the user and group information as desired, click on DB Action -> Upgrade.
There is no harm in pushing user information to a database during the day while users are logged in.

Table of Contents





Use SQL Anywhere with CQ web.
As of CQ2001A, in addition to the instructions in the install manual, the following must be done as well to access CQ via the web if using SQL Anywhere for the schema repository.
The manual has you give Full Control to the anonymous user for the ClearQuest portion of the HKEY_LOCAL_MACHINE and HKEY_USERS hives. Not in the manual is the fact that for this configuration, you also need to give the anonymous user Full Control of the HKEY_LOCAL_MACHINE->SOFTWARE->Sybase->SQLAnywhere registry key on the SQL Anywhere server. The anonymous user must be a domain account if SQL Anywhere is not installed on the web server along with CQ.

Table of Contents





Restart web services.
Updated: 08/22/11
Web services must be stopped and restarted in the following order. This is applicable to the new Rational Web Platform introduced in 2003.06.13.
1) Log onto the web server as an administrator.
2) Go to Start -> Control Panel -> Administrative Tools -> Services
3) Stop "Rational ClearQuest Registry Server". This will ask you to shut down "Rational ClearQuest Request Manager" as well. Answer yes.
4) Stop "Rational Web Platform, HTTP server".
5) Restart "Rational Web Platform, servlet engine".
5) Start "Rational Web Platform, HTTP server".
6) Start "Rational ClearQuest Request Manager", which will automatically start "Rational ClearQuest Registry Server".
This can also be accomplished by running “...\Common\rwp\bin\rwp_restart.bat”.

Do the following for the Eclipse web interface introduced in 7.1.
1) Log onto the web server as an administrator.
2) Right-click on My Computer and select Services.
3) Stop "IBM HTTP Server ...".
4) Stop "IBM WebSphere Application Server ..."
5) Go to Windows Task Manager -> Processes.
6) End any java.exe processes.
7) Start the services from steps (3) and (4).

See also http://publib.boulder.ibm.com/infocenter/cqhelp/v7r0m0/index.jsp?topic=/com.ibm.rational.clearquest.webadmin.doc/c_start_and_stop_cq_web.htm

Table of Contents





Turn on Java web tracing.
Updated: 02/24/06
This tracing is only applicable to the new CQ web. To debug the CQ Java Web you will need to apply a registry key to the Windows computer that is running the Java Web. This is a sample registry key:
REGEDIT4 

[HKEY_USERS\.DEFAULT\Software\Rational Software\ClearQuest\Diagnostic] 
"Trace"="API=2;EMAIL;EDIT;SQL;JNIREG=1;LICENSE;HOOKS;PERL;VBASIC=1;THROW;DB_CONNECT=2;TIMER;SESSION;" 
"Report"="MESSAGE_INFO=0X409" 
"Output"="C:\\\\temp\\\\cqtrace.txt"

This is a list of all the valid trace keys:

API 
CHARTS 
CODEPAGE 
DB_CONNECT 
DBDESC 
EDIT 
EMAIL 
EMAIL_VB 
HOOKS 
JNIREG 
LICENSE 
MAINS 
METADATA_INIT 
MULTISITE 
ODS 
PACKAGES 
PERL 
RESULTSET 
SESSION 
SYSTEM_UPGRADE 
THREAD 
THROW 
TIMER 
USER_ADMIN 
VBASIC 

Steps for Applying the key:

1) Apply the registry key.
2) Restart the Rational ClearQuest Request Manager. You will have to restart this process every time you modify the registry.
3) Recreate the problem.
4) Zip and send a copy of "C:\temp\cqtrace.txt" to technical support.
5) Remove the debugging settings by creating a registry key with blank entries for "Trace", "Report" and "Output".
6) Restart the Rational ClearQuest Request Manager.
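Steps 1, 2, 5 and 6 can be scripted if you toggle tracing often. This is a rough sketch that assumes the registry text above has been saved as C:\temp\cq_trace_on.reg, with a matching file containing blank "Trace", "Report" and "Output" entries saved as C:\temp\cq_trace_off.reg; both file names are placeholders.
	# Import the trace settings silently, then bounce the Request Manager so
	# they take effect.
	system('regedit /s C:\temp\cq_trace_on.reg');
	system('net stop "Rational ClearQuest Request Manager"');
	system('net start "Rational ClearQuest Request Manager"');
	# To turn tracing back off, import cq_trace_off.reg and restart the
	# Request Manager again.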

Table of Contents





Load balance request managers on multiple web servers.
Updated: 02/24/06
The following sets up request manager load balancing between two web servers running the new CQ web. In this configuration, one of the web servers is still the primary web server, but it uses the request managers on both machines to talk to the database.

NOTE: While this configuration will help improve performance, you still only have a single web server. If that server goes down or the request manager on either machine hangs for some reason, the whole CQ website will stop functioning. A much more robust configuration is to skip Rational request manager load balancing and place both web servers behind an F5 Load Balancer. In that configuration, you have truly redundant web servers. If one goes down, the other isn't affected.

Configure SVR-01 and SVR-02 to each run a request manager service.
1) On SVR-01 and SVR-02 edit the following file: C:\Program Files\Rational\ClearQuest\cqweb\cqserver\config\jtl.properties
2) Change the following line to read: JTLRMIREGISTRYSERVERS=SVR-01:1130,SVR-02:1130
3) Save the properties file.
4) On SVR-01 edit the following file: C:\Program Files\Common\rwp\webapps\cqweb\WEB-INF\classes\jtl.properties
5) Change the following line to read: JTLRMIREGISTRYSERVERS=SVR-01:1130,SVR-02:1130
6) Save the properties file.
7) Restart web services on both SVR-01 and SVR-02.

Table of Contents





Get web server status.
Updated: 02/24/06
In the new CQ web, you can get a brief synopsis of the status of a web server using the following URL:

//web-server/server-status
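To check the status from a script rather than a browser, something like the following works, assuming the LWP module is available in your Perl installation; "web-server" is a placeholder host name.
	use LWP::Simple;
	# Fetch and print the status page.
	$status = get("http://web-server/server-status");
	print defined $status ? $status : "request failed\n";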

Table of Contents





Rename java.exe web services.
Updated: 02/24/06
Unfortunately, the processes behind the web services in the new CQ web all run as java.exe, which makes it difficult to tell them apart for monitoring purposes. See the section called "Renaming ClearQuest Java Processes on Windows for Easier Monitoring" in the following document: http://www-106.ibm.com/developerworks/rational/library/5503.html

Table of Contents





Performance tune new CQ web.
Updated: 02/24/06
The following document can help improve performance of the new CQ web: http://www-106.ibm.com/developerworks/rational/library/5503.html

Table of Contents





Log into a test database via the web.
Updated: 02/24/06
In the Client interface, the pulldown menu of user databases at login only shows databases designated as production; to log into a test database, you can simply type in its logical name. In the web interface, however, the only databases you can choose at login are those designated as production, and you cannot type in the name of a test database unless you add "?test=1" to the URL. If that is on the URL when you log in, instead of the pulldown menu of databases in the upper left there will be a simple text box in which you can type the logical name of a test or production database.

server/cqweb/login?test=1

Table of Contents





Create a URL for a record.
Updated: 01/18/13
Web links (URLs) can be sent in emails, included in record fields, etc.
7.0 and later:
http://web-server/cqweb/main?command=GenerateMainFrame&service=CQ&schema=schema-repo&contextid=user-db&entityID=dbid&entityDefName=record-type
- or -
http://web-server/cqweb/restapi/schema-repo/user-db/RECORD/record-id?format=HTML&noframes=true&recordType=record-type

Note that older versions of CQ had different URL formats.
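If you are generating these links from a script, a sketch like the following builds the 7.0-style URL; the server, schema repository, user database, credentials, record type and record ID are all placeholders.
	use CQPerlExt;
	# Build a 7.0-style record URL from a user session.  Everything in
	# quotes below is a placeholder.
	$session = CQSession::Build();
	$session->UserLogon("login", "password", "user-db", "dbset");
	$entity = $session->GetEntity("defect", "SAMPL00000042");
	$url = sprintf("http://%s/cqweb/main?command=GenerateMainFrame&service=CQ" .
		"&schema=%s&contextid=%s&entityID=%s&entityDefName=%s",
		"web-server", "schema-repo", "user-db",
		$entity->GetDbId(), $entity->GetEntityDefName());
	print "$url\n";
	CQSession::Unbuild($session);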

Table of Contents


ejostrander@cox.net
Return to the home page.

This page last modified: 05/28/2020