OzWeb
OzWeb supports hypercode and other cross-linked formal and informal documentation, including text, images, audio/video and other media, which may be distributed among multiple repositories geographically dispersed across the Internet and/or an organizational intranet. OzWeb provides software developers with project-specific data and services added on top of World Wide Web entities, and groups multiple users into collaborative teams.
Publications.
Gail E. Kaiser, Stephen E. Dossick, Wenyu Jiang, Jack Jingshuang Yang and Sonny Xi Ye, WWW-based Collaboration Environments with Distributed Tool Services, to appear in World Wide Web, Baltzer Science Publishers.
Gail E. Kaiser, Stephen E. Dossick, Wenyu Jiang and Jack Jingshuang Yang, An Architecture for WWW-based Hypercode Environments, in 1997 International Conference on Software Engineering: Pulling Together, May 1997, pp. 3-12.
Availability.
OzWeb has been retired.
Pern
Objective
Columbia University is developing and packaging technologies intended to reduce the time and costs of maintaining large legacy software systems and increase the efficiency and quality of changes to those systems. Columbia produces frameworks, middleware and components that can be combined with other EDCS results and/or COTS products in software development environments and tools. Their focus is on facilities that help software designers, developers, maintainers, users, their managers and other stakeholders to efficiently find, organize, analyze, synthesize and exploit the design rationale and other information they need in large, heterogeneous, disconnected repositories of formal and informal materials describing complex software systems and their development processes. Columbia is particularly concerned with intra-team and inter-team collaboration services, process/workflow, and information management. Their prototype systems enable users to continually customize and (re)configure the group production information spaces of software development environments to optimize them to the software requirements and evolutionary trajectory of immediate concern. Their approach supports fine-grained, frequent, incremental interactions among the individuals and teams participating in large-scale long-lived software engineering projects that may be geographically and temporally dispersed across multiple autonomous organizations.
Approach
Prof. Kaiser has investigated software process modeling and enactment since 1986, initially in the Marvel project. In the early to mid-1990s, her lab introduced cross-organizational processes operating over the Internet, in Oz and OzWeb. Oz enabled the software development team and other stakeholders to be geographically, temporally and/or organizationally dispersed. OzWeb added integration of Web and other external information resources, whereas Oz and Marvel had assumed all project materials to reside in their native objectbases. OzWeb’s plugin services and tools were accessible via conventional Web browsers, HTTP proxies and Java GUIs, improving dramatically on Marvel’s and Oz’s XView/Motif user interface clients for the X Window System. The successive prototype frameworks were used on a daily basis in-house to maintain, deploy and monitor their own components and APIs.
The new process technology now under development at Columbia is broadly based on more than a decade of research on and experimentation with architecting and using software development processes targeted to Internet/Web middleware and applications, but reflects a major departure from previous directions. In particular, current process and workflow systems are often too rigid for open-ended creative intellectual work, unable to rapidly adapt either their models or their enactment to situational context and/or user role. On the other hand, the process/workflow ideal implies a flexible mechanism for composition and coordination of information system components as well as human participants.
Mobile workflow agents, called worklets, address both the problems and the promise: Worklets might be constructed or parametrized on the fly by a human or a program, then transmitted from component host to host through a “meta-workflow” – a dynamically determined routing pattern reactive to the latest host’s circumstances and surroundings as well as past and planned trajectories. Workflow typically involves actions performed on data, or perhaps interactions among humans concerned with implicit data “resident” in the humans’ memories. But here the “work” focuses on (re)customizing the host’s configuration – loosely construed, including, e.g., schemata, lock tables, authorization capabilities, event subscriptions, even host machine registry. And, as its name implies, worklets can update the process model(s) of a workflow management system. In the degenerate case of the usual data, a worklet is simply a workflow snippet whose semantics are dependent on the host’s interpretation of its directives. [Note that by host is meant a particular information system component, not necessarily the entire machine or operating system platform.]
Each worklet is a small scripted program that, like various web agents, combines a mobile agent with a smart RPC, but in this case potentially includes workflow-like rules as well as imperative code for host-context exploration and instantiation. When worklets manipulate the configuration model(s) of a middleware service or a complex document, the level of dynamism is limited only by the capabilities of the host (as is or wrapped). For example, in the case where the host is a database management system and the worklet initiates changes to its schema, that “(re)configuration” might immediately evolve all data, upgrade data as it happens to be accessed, apply only to new data, or become effective only after a long off-line process, depending on the database system’s innate functionality. However, the “configuration” implied by the database’s contents could usually be modified on the fly as worklets arrive or the triggering conditions of already-local worklets become satisfied.
As another example, worklets might define part of or modify the workflow definition being enacted by a conventional workflow management tool, inserting their bodies into the model or matching against existing tasks to be adapted or removed. Whether or not a newly modified process model applies to any in-progress process steps, the current or following spiral iteration, or only to the “next” instance is generally limited by the capabilities of the base workflow management system. Unless, of course, the worklet enacts a workflow fragment on its own, which is where the greatest performance/functionality gains and flexibility can be achieved. Any part of an intelligent document could be treated as a configuration model to be upgraded by the worklet, e.g., to tailor and install its components in a distributed enterprise setting.
A host-specific worklet adaptor must be constructed for each anticipated host system or component, and is attached to that host. Obviously, construction of such adaptors is plausible only if the host provides an API or extension language, can reasonably be wrapped, or if its source code is available and the adaptor builder is willing and able to plunge into it. Generally, the adaptor builder must have expert-level understanding of the host and the capabilities it exports. However, the worklet writer should have no need to understand any particular host, other than the generic category of potential hosts (e.g., workflow automation tools) likely to receive the worklet.
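To make the division of labor concrete, a minimal Java sketch follows; the Worklet and WorkletAdaptor names and methods are hypothetical, not the actual Worklets API. The worklet carries host-independent directives, while the adaptor written by a host expert translates those directives into whatever operations the host actually exports.

```java
// Hypothetical sketch, not the actual Worklets API: the adaptor hides
// host-specific configuration capabilities behind a small generic facade.
import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

interface WorkletAdaptor {
    boolean supports(String directive);                             // can this host handle the directive?
    void apply(String directive, String... args) throws Exception;  // perform the (re)configuration
}

class Worklet implements Serializable {
    private final List<String[]> directives;   // each entry: directive name followed by its arguments

    Worklet(List<String[]> directives) { this.directives = directives; }

    // Invoked after the worklet arrives at a host; directives the host cannot
    // handle would be skipped or forwarded along the meta-workflow (not shown).
    void run(WorkletAdaptor adaptor) throws Exception {
        for (String[] d : directives) {
            if (adaptor.supports(d[0])) {
                adaptor.apply(d[0], Arrays.copyOfRange(d, 1, d.length));
            }
        }
    }
}
```

The worklet writer programs only against the generic adaptor facade, which is what lets the same worklet visit different hosts in a category.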
Prof. Kaiser’s Oz process-centered software development environment framework was perfectly poised to exploit the emergence of the World Wide Web in the mid-1990’s for additional reasons beyond the opportunity for mobile workflow. Her lab’s proof-of-concept realization of OzWeb added a new kind of built-in object base class, WebObject, to the native object management system. In addition to directly storing the object content, WebObjects also contained a URL pointing to that content’s “home” at any website on the Internet (or intranet). The local content was treated as a cache, with the remote website queried via HTTP conditional GET – which retrieves the web entity only if it has changed more recently than the cached copy. Users could access WebObjects either through the native X11 Windows client originally constructed for Oz, or through any web browser configured to use their HTTP proxy.
When the browser requested a URL that matched a WebObject, it was retrieved from the OzWeb server along with added-on HTML showing the attributes, relationships, etc. imposed on the entity within OzWeb. But when the browser requested any other URL, not currently known to OzWeb, the proxy forwarded the request to the appropriate external website. In this case the user interface only added on a frame giving the user the option of immediately adding that web entity to the OzWeb objectbase. OzWeb also supported HTTP PUT, for updating backend websites containing in-progress project materials.
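As a rough illustration of the caching behavior just described, the refresh logic amounts to an HTTP conditional GET against the entity's home URL. The class and field names below are invented, and OzWeb itself was implemented in C; this is only a sketch of the idea.

```java
// Illustrative sketch only (invented names; OzWeb itself was implemented in C):
// the cached copy is revalidated with an HTTP conditional GET, so the remote
// entity is transferred only if it has changed since it was last fetched.
import java.net.HttpURLConnection;
import java.net.URL;

class WebObjectCache {
    private final String homeUrl;   // the entity's "home" URL on the Internet or intranet
    private byte[] content;         // locally cached copy of the web entity
    private long lastFetched;       // timestamp sent as If-Modified-Since

    WebObjectCache(String homeUrl) { this.homeUrl = homeUrl; }

    byte[] refresh() throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(homeUrl).openConnection();
        conn.setIfModifiedSince(lastFetched);                        // conditional GET
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
            return content;                                          // 304: cached copy is current
        }
        content = conn.getInputStream().readAllBytes();              // 200: entity changed, re-cache it
        lastFetched = System.currentTimeMillis();
        return content;
    }
}
```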
Although sufficient to support her lab’s own software development, this approach didn’t scale very well as they attempted to add on other kinds of Internet and proprietary protocols besides WebObjects/HTTP. This is not very surprising: the OzWeb code was essentially legacy code that had far outlived its origins in Prof. Kaiser’s 1986 Marvel design. Its more than 300K source lines had been added to or modified by about fifty students, included some code written a decade earlier, and still reflected the mid-1980s Unix/C model. OzWeb was ready to retire. Prof. Kaiser’s lab started over again, with a new design and architecture, coding in Java, and targeting the Windows NT platform – to produce Xanth. They also continued componentizing the old OzWeb facilities, an effort that had been in progress since the later versions of Marvel, with all the new components also written in Java. For instance, the original Pern transaction manager component was redesigned and reimplemented from scratch as JPernLite. The Rivendell tool service was integrated as a mandatory component of University of California at Irvine’s Chimera, to launch its viewers.
Xanth neatly partitioned data access modules (DAMs) for accessing arbitrary backend data sources through their native protocols, presentation access modules (PAMs) for appearing to arbitrary front-end user interface and tool clients as their native servers, and service access modules (SAMs) for inserting hyperlinking, annotation, user authorization, workflow, transaction management, etc. services wrapped around PAM and DAM operations. The SAMs were connected to each other and the DAMs and PAMs via a novel event bus, called the Groupspace Controller, which not only propagated notification events but also supported request events that could be vetoed by any service so registered. Veto capability is needed to realize workflow constraints, transaction all-or-nothing guarantees, etc. Conventional event notification after the fact of a prohibited activity is obviously too late. Many events (e.g., sending email, printing) simply cannot be undone or fully compensated, and those that can be undone incur substantial overhead that would be unnecessary had the architecture allowed them to be prevented in the first place.
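The veto idea can be sketched as follows; the interface and class names are assumptions for illustration, not the actual Groupspace Controller API. The point is that registered services see a request before the operation runs, rather than only being notified after the fact.

```java
// A minimal sketch (assumed names, not the actual Groupspace Controller API)
// of vetoable request events: any registered service can reject a request
// before the operation happens, in addition to ordinary notification.
import java.util.ArrayList;
import java.util.List;

interface ServiceModule {                  // e.g., a SAM such as workflow or transaction management
    boolean approve(String request);       // return false to veto the request
    void notifyDone(String request);       // ordinary after-the-fact notification
}

class GroupspaceBus {
    private final List<ServiceModule> services = new ArrayList<>();

    void register(ServiceModule s) { services.add(s); }

    // Returns true only if every registered service approves the request.
    boolean requestEvent(String request) {
        for (ServiceModule s : services) {
            if (!s.approve(request)) return false;    // vetoed: the operation never happens
        }
        for (ServiceModule s : services) s.notifyDone(request);
        return true;
    }
}
```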
Xanth made it possible to reimplement OzWeb effectively and efficiently, in about 50K lines of Java code, on a fully scalable architecture. Columbia then easily incorporated a variety of backend data sources such as CVS source code repositories, NNTP newsgroups, Ical group calendar managers, and so on. Prof. Kaiser’s lab also developed a variety of Web-oriented user interfaces for Xanth, moving away from relatively limited HTML to try browser-resident applets and host-installed apps, as well as legacy clients, e.g., Chimera linkbase viewers.
But none of these user interfaces were truly satisfactory. Like other software development environment researchers and commercial developers, they were using single-user styles of user interface as clients for an inherently collaborative multi-user system. They realized then that they needed to develop groupviews: a user interface style whose core centers on collaboration. The best examples that could be found of such user interfaces were in extremely popular on-line games and socializing forums: 3D virtual worlds and MUDs. These forums are actively used by the general populace, from school-age children to the elderly, with no formal computer science training and often not even computer literacy training. Users pick them up through intuition from their physical-world counterparts and informal peer help.
These insights led to Prof. Kaiser’s CHIME (Columbia Hypermedia IMmersion Environment) project, initiated this past year. One of the project’s most deeply seated tenets is to leverage success, such as that achieved by 3D multi-player games and multi-user domains (MUDs), in devising usable, useful and used groupviews. Systems constructed using the CHIME infrastructure present their users with a 3D depiction of hypermedia and/or other information resources. Users visualize, and their avatars operate within, a collaborative virtual environment based on some metaphor selected to aid their intuition in understanding and/or utilizing the information of interest or relevant to the task at hand. Users “see” and interact with each other, when in close [virtual] proximity, as well as with the encompassing information space. Actions meaningful within the metaphor are mapped to operations appropriate for the information domain, such as invoking external tools, running queries or viewing documents.
A proof-of-concept implementation of CHIME has recently been developed. In the preliminary architecture, the base data from one or more sources is first mapped to extensible subtypes of the generic components (containers, connectors, components and behaviors) in a virtual model environment (VEM). This includes specifying relationships (connection and containment) among entities from the same and different sources, which might be imposed by the application rather than inherent in the data. A VEM is then mapped to extensible subtypes of multi-user domain facilities like rooms, doors or hallways between rooms, furnishings, object manipulations, and so on. These are in turn rendered and activated according to the chosen 3D theme world “plugin”, which can be dynamically loaded into the generic theme manager at run-time and thence transmitted to the user clients. The same VEM can be mapped simultaneously to multiple theme managers, which can be useful for debugging, administration and system monitoring (although it would probably be too confusing for members of the same collaborative team to operate within significantly different “views”).
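A schematic illustration of the two-stage mapping appears below, with invented class names (CHIME's real interfaces are not shown here): backend entities become generic VEM subtypes, which a theme mapper then turns into MUD-style facilities for a theme world plugin to render.

```java
// Hypothetical illustration of the CHIME mapping stages; all names are invented.
import java.util.List;

abstract class VemEntity { String name; }
class Container extends VemEntity { List<VemEntity> contents; }  // e.g., a source package
class Connector extends VemEntity { VemEntity from, to; }        // e.g., a hyperlink or dependency
class Component extends VemEntity { }                            // e.g., an individual document
class Behavior  extends VemEntity { String action; }             // e.g., "view", "compile", "query"

abstract class MudFacility { String label; }
class Room       extends MudFacility { }                 // a Container becomes a room
class Door       extends MudFacility { }                 // a Connector becomes a door or hallway
class Furnishing extends MudFacility { String onUse; }   // a Component/Behavior becomes a furnishing

class ThemeMapper {
    MudFacility map(VemEntity e) {
        MudFacility f;
        if (e instanceof Container)      f = new Room();
        else if (e instanceof Connector) f = new Door();
        else {
            Furnishing fur = new Furnishing();
            if (e instanceof Behavior) fur.onUse = ((Behavior) e).action;  // maps to a tool, query or viewer
            f = fur;
        }
        f.label = e.name;
        return f;   // a theme world plugin then renders and activates the facility
    }
}
```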
Thus an e-commerce web site peddling computer hardware might look and feel like an on-screen CompUSA; a digital library might be illustrated as, indeed, a library. Application domains without obvious physical counterparts might choose more whimsical themes. For example, a software development environment for an open-source system might map each source code package to a room on the Starship Enterprise, with the “main” subprogram represented by the bridge; amateur programmers proposing a modification could beam aboard, and so forth. Note these are just possibilities: CHIME is a generic architecture; no particular theme is built in. But environment designers do not necessarily need to program, since graphic textures and models can be supplied by third parties, and the specific layout and contents of a world are automatically generated according to an XML-based configuration. The environment designers must, of course, understand their backend repositories sufficiently to write the XML and corresponding processors, unless such meta-information is already supplied by the sources.
Columbia’s Workgroup Cache system also draws from Prof. Kaiser’s prior experience investigating process-centered environments. The early-90’s Laputa extension to Oz employed information about the software process or workflow to determine which documents to prefetch for later work while disconnected from the network (e.g., using a laptop). For example, Laputa might fetch all documents necessary for the completion of a selected task, plus documents necessary for tasks expected to follow that task shortly thereafter in the process. Workgroup Cache similarly considers workflow semantics to predict future data needs, but extends beyond Laputa by including the work processes of multiple users, i.e., multiple participants in the workflow, in its document prefetch criteria. Workgroup Cache also introduces recommendations, or pushes, of shared documents under certain circumstances.
A Workgroup Cache system operates as a virtual intranet, providing possibly remote cache sharing to members of the same workgroup. Criteria are associated with each workgroup to pull documents from an individual member’s cache or an outside information resource to the shared cache, or push from the shared cache to an individual cache or to a user’s screen. These criteria can (in principle) leverage any knowledge available about the content and usage of documents as a basis for prediction of future accesses and/or recommendations. Cache pull, replacement, and push criteria might be based on software process or workflow routing among workgroup members, document access patterns of workgroup members, or XML metadata associated with or embedded in accessed documents. For example, if my supervisor keeps returning to such and such a technical report over a recent time interval, or wrote it, then I might want to read it too. Criteria might be defined via simple filter rules, like Cisco firewalls or Web search engine queries, or via a very elaborate event/data pattern notation.
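For instance, a simple pull/push criterion might be expressed as a predicate over recent access events, roughly as sketched below (hypothetical names; the actual Workgroup Cache rule notation is not shown).

```java
// A sketch under assumed names: each criterion examines recent access events
// and decides whether a document should be pulled into the shared workgroup
// cache or pushed to an individual cache or screen.
import java.util.List;

record AccessEvent(String user, String documentUrl, long timestamp) { }

interface CacheCriterion {
    boolean matches(String documentUrl, List<AccessEvent> recentAccesses);
}

// Example rule: "if my supervisor keeps returning to a document recently, recommend it to me."
class SupervisorInterestCriterion implements CacheCriterion {
    private final String supervisor;
    private final int threshold;
    private final long windowMillis;

    SupervisorInterestCriterion(String supervisor, int threshold, long windowMillis) {
        this.supervisor = supervisor; this.threshold = threshold; this.windowMillis = windowMillis;
    }

    public boolean matches(String documentUrl, List<AccessEvent> recentAccesses) {
        long cutoff = System.currentTimeMillis() - windowMillis;
        long hits = recentAccesses.stream()
                .filter(a -> a.user().equals(supervisor)
                          && a.documentUrl().equals(documentUrl)
                          && a.timestamp() >= cutoff)
                .count();
        return hits >= threshold;   // enough recent supervisor accesses: push/pull the document
    }
}
```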
Workgroup membership can be determined in a number of ways: The users can be specified in advance, such as a software development team working closely together (although they might be physically dispersed). Or workgroups can be constituted and updated dynamically, say, by including users whose document accesses, or whose own home page links, match patterns associated with the workgroup. For instance, the amateur programmers actively working on the same subsystem of an open-source project like Linux might be automatically added to the corresponding subsystem’s workgroup when they submit updates. Users may be members of multiple workgroups at the same time.
Darkover
- Name: Darkover
- Source: Columbia University, Department of Computer Science, Programming Systems Laboratory
- Brief description:
Darkover is an object management system that provides persistent object storage and retrieval. It fully supports object-oriented data modeling and provides a C function-level interface for data access and manipulation. Darkover also has a query engine that supports both associative (SQL-like) and navigational queries. Darkover loads a schema definition written in Doddl (Darkover Data Definition Language), which defines the classes in the objectbase. The schema definition can be written as a set of multiple files, which are compiled by the schema compiler into the internal format that Darkover reads. The Evolver utility supports limited schema evolution of existing objectbases. A class specifies one or more superclasses, primitive attributes (integers, strings, timestamps, etc.), file attributes (pathnames to files in an intentionally opaque “hidden file system”), composite attributes in an aggregation hierarchy, and reference attributes allowing arbitrary 1-to-N relations among objects.
Darkover provides a C functional interface to access the objects in the objectbase. Each object is identified by a unique, persistent identifier. Given an object identifier (aka OID), functions are provided to access all the information about that object, such as its class, attributes, and related objects. There are also functions to construct the objectbase, such as adding objects, deleting objects, moving objects, etc.
- Evaluation against applicable general dimensions:
- Availability (commercial/licensed/public domain): Licensed.
- Cost: Free to ftp a PGP-encrypted tar file. Nominal cost to ship tape and manuals.
- Degree of support/maturity/testing/usage: Support available only to funding sponsors. In regular use as part of the Oz environment since November 1994. Also used by undergraduate students in the Introduction to Software Engineering course in Spring 1996.
- Speed: Depends on the actual application usage, the size of the objectbase and the complexity of the schema. Reasonable overhead.
- Computing platforms and OS: SparcStation, SunOS 4.1.3 and Solaris 2.5.1.
- Environment Dependencies: C, lex and yacc.
- Software Dependencies:
- Development: Standard UNIX and GNU tools such as gcc, emacs, diff, rcs.
- Execution: None.
- Development:
- Language compatibilities: Darkover is implemented in C, lex and yacc.
- Footprint:
- Source Distribution: 960KB
- Source Installation: 960KB
- Binary Installation: 1.2MB
- Openness/integrability/source availability: All sources available.
- Extensibility: Through the schema definition language. Darkover is provided as a C library and can be interfaced to other system components.
- Pedigree: The research that resulted in Darkover was funded by the Advanced Research Projects Agency, the National Science Foundation, and the New York State Science and Technology Foundation Center for High Performance Computing and Communications in Healthcare.
- Contact person(s):
Marvel User Support
Programming Systems Laboratory
Department of Computer Science
Columbia University
500 West 120th Street
New York, NY 10027
marvelus@cs.columbia.edu
(212) 939-7184
fax: (212) 666-0140
Marvel
Marvel extended concepts founded in the consistency and automation rules of ISI’s CommonLisp Framework to more general software processes for mainstream programming languages and environments. In particular, each software development task or subtask was defined by a parameterized rule with preconditions, activity, and effects. The preconditions were required to be satisfied prior to performing the activity, and one of the mutually exclusive effects was asserted on completion of the activity. Opportunistic forward and backward chaining among these rules, to fulfill the preconditions and carry out the implications of the effects, automated some of the more menial segments of processes and guided users through the human-oriented creative steps. Marvel’s client/server architecture, with a shared objectbase and process engine governed by a cooperative transaction manager, supported multi-participant processes. External tools and human activities were integrated into the process through scripted envelopes.
Marvel is a process-centered environment that supports teams of users working on medium to large scale projects (e.g., Marvel has been used to support development and maintenance of a software system with over a quarter million lines of code and ten developers). Marvel is completely generic, and can be used for a variety of applications besides software engineering, ranging from document processing, civil and mechanical engineering to network management, managed healthcare and education. An instantiated environment is created by an administrator who provides the data schema, process model, tool envelopes, and coordination model for a specific application; the typical user of the environment need not be concerned with these details. The schema classes define the structure of a coarse-grained object-oriented database to contain the relevant artifacts (legacy systems can be immigrated into a Marvel objectbase using the Marvelizer utility). The XView user interface supports graphical browsing and ad hoc queries; there is also a command line interface for dumb terminals and batch scripts.
The process (or workflow) is described in a process modeling language. Each process step is encapsulated in a rule that can be invoked from the user’s menu and provided with parameters by clicking on a graphical representation of the objectbase. The body of a rule consists of a query to bind local variables; a logical condition that is evaluated prior to initiating the activity; an optional activity in which an arbitrary external tool or application program may be invoked through an envelope; and a set of effects that each assert one of the activity’s alternative results. Marvel enforces the process, in the sense that the condition must be satisfied in order to execute the activity and effect; forward and backward chaining over the rule base automate tool invocations.
A user decides when to request a particular process step, and then Marvel enacts the process by selecting the rule(s) with matching name and signature, evaluating each of these rules until it finds one whose condition is already satisfied, or is satisfied by backward chaining. This rule’s activity, if any, is then executed. One of the effects is then selected, according to the result of the activity, and Marvel forward chains to all rules whose conditions become satisfied by this effect. However, if none of the conditions of the rule(s) matching the user command can be satisfied, then the user is informed that it is not possible to undertake that process step (at this time). Predicates in the condition and assertions in the effects may be annotated as atomicity vs. automation. By definition, all forward chaining through atomicity assertions to rules with satisfied conditions is mandatory. In contrast, chaining solely for automation purposes is optional. Possible chains are compiled into an efficient internal representation when the environment is instantiated.
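The enactment loop just described can be summarized schematically as follows; Marvel itself is implemented in C, and these interfaces are invented purely to illustrate the condition/activity/effect cycle with backward and forward chaining, not Marvel's actual internals.

```java
// Schematic sketch of rule enactment with backward and forward chaining.
import java.util.List;

interface Rule {
    boolean conditionSatisfied(Object[] params);
    List<Rule> rulesThatCouldSatisfyCondition(Object[] params);  // backward-chaining candidates
    int executeActivity(Object[] params);                        // returns the envelope's return code
    Effect selectEffect(int returnCode);                         // one of the mutually exclusive effects
}

interface Effect {
    void assertOn(Object[] params);                              // update the objectbase
    List<Rule> newlySatisfiedRules(Object[] params);             // forward-chaining candidates
}

class ProcessEngine {
    // Enact one user-requested process step.
    boolean enact(Rule rule, Object[] params) {
        if (!rule.conditionSatisfied(params)) {
            for (Rule r : rule.rulesThatCouldSatisfyCondition(params)) enact(r, params);  // backward chain
            if (!rule.conditionSatisfied(params)) return false;  // step not possible at this time
        }
        Effect effect = rule.selectEffect(rule.executeActivity(params));
        effect.assertOn(params);
        for (Rule r : effect.newlySatisfiedRules(params)) enact(r, params);               // forward chain
        return true;
    }
}
```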
Conventional file-oriented tools and application programs are integrated into a Marvel process without source modifications, recompilation or relinking through an enveloping language. The rule activity indicates the envelope name, input and output literals, and file attributes; the envelope’s implicit return code selects the actual effect from among those given. The body of an envelope is a conventional UNIX shell script.
A client/server architecture supports multiple participants in the same process. Each client provides the user interface, checks the arguments of commands, and invokes the appropriate envelope when an activity is executed, whereas the process engine, synchronization management and object management reside in the central Marvel server. Scheduling of client requests by the server is first-come, first-served, with rule chains interleaved at the natural breaks when clients execute activities. Clients may run on different hosts from the server on the same LAN.
The coordination model includes a lock compatibility matrix and an ancestor lock table for composite objects. The default concurrency control policy distinguishes between chaining for atomicity vs. automation purposes. Chaining via atomicity assertions/predicates is treated as a conventional database transaction: if an atomicity chain incurs a lock conflict, the entire chain should be aborted (rolled back). In contrast, automation annotations define each rule as an independent transaction – and thus automation can be terminated (and not rolled back) at process step boundaries without completing the entire chain. A preliminary coordination modeling language specifies scenarios where this default policy can be relaxed, to increase concurrency and enhance collaboration. The administrator specifies concurrency control policies in terms of primitives to notify a user, abort a rule chain, suspend a rule chain until another has completed, or ignore the conflict.
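A minimal sketch of these conflict-resolution primitives, using invented names rather than the actual coordination modeling language, might look like this:

```java
// Invented names: a policy maps a lock conflict to one of the four primitives.
enum ConflictAction { NOTIFY_USER, ABORT_CHAIN, SUSPEND_CHAIN, IGNORE_CONFLICT }

interface CoordinationPolicy {
    // Decide what to do when 'requesting' hits a lock held by 'holding'.
    ConflictAction onConflict(RuleChain requesting, RuleChain holding);
}

// Example of relaxing the default to enhance collaboration: only notify the
// users when two automation-only chains conflict, but preserve all-or-nothing
// guarantees whenever an atomicity chain is involved.
class RelaxedPolicy implements CoordinationPolicy {
    public ConflictAction onConflict(RuleChain requesting, RuleChain holding) {
        if (requesting.isAtomicityChain() || holding.isAtomicityChain())
            return ConflictAction.ABORT_CHAIN;
        return ConflictAction.NOTIFY_USER;
    }
}

class RuleChain {
    private final boolean atomicity;
    RuleChain(boolean atomicity) { this.atomicity = atomicity; }
    boolean isAtomicityChain() { return atomicity; }
}
```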
Atlantis
Objective
The Atlantis project investigates componentization of workflow modeling and execution systems, particularly the synergy of process-centered environment and computer-supported cooperative work components. The components are intended to interoperate with legacy and off-the-shelf tools and frameworks, and to indicate requirements on future systems, providing a concrete transition path.
Approach
Atlantis is a consortium consisting of the Programming Systems Lab at Columbia University, the Advanced Collaborative Systems Lab at University of Illinois at Urbana-Champaign (now partially located at the Distributed Systems Technology Centre at University of Queensland, Australia) and, formerly, the US Applied Research Lab of Bull HN Information Systems (ARL has shut down; their DARPA contract was cancelled). The two academic groups have been working on their own diverse workflow-related projects for over a decade. They initiated cross-licensing with each other and with Bull to evaluate the prospects for integrating their technologies, culminating in a plan to investigate several important, practical problems not previously being pursued:
Processes (or workflows) can vary substantially across organizations and projects and are “situated”, i.e., dynamically changing in response to a variety of technological, sociological and political stimuli. A process-centered environment (PCE) provides computer-aided support for a range of project-specific processes. The general goals of research in this area are to devise useful paradigms for representing processes, to determine means by which environments may assist teams of users in carrying out processes, and to discover mechanisms that permit in-progress processes to evolve compatibly.
- Groupware. The main theme is to integrate human/human collaboration, studied in the computer-supported cooperative work (CSCW) community, with tool/tool integration, the forte of the software engineering community. The Illinois researchers had addressed process primarily from a CSCW perspective, e.g., Bull used their framework to develop a system for directing human behavior during document inspections. The Columbia lab has been working on multi-user PCEs, particularly enforcement of process constraints and automation of tool invocations to satisfy prerequisites and fulfill consequences of process steps. The open architecture for workflow systems draws from both lines of research, with the results originally intended to be transitioned by Bull into potential products.
- Process transition from current CASE and tool integration technology to PCEs by developing an external process server component that interprets project-specific process definitions. The open architecture supports mediation between such a server and applications, to minimize or avoid changes to pre-existing systems. The process server component provides a rule-based “process assembly language” into which the user organization’s choice of process modeling formalism (e.g., Petri nets, grammars, task graphs) can be translated, to be executed by the corresponding “process virtual machine”, thus lowering the barrier to adoption.
- Collaboration transition from current database transactions and “check-out” technology to collaborative workflow environments by developing an external cooperative transaction manager. It is widely agreed that the classical transaction model is inappropriate for long-duration, interactive and/or cooperative activities, but there is no consensus on the numerous extended transaction models that have been proposed, and it appears that different models may be needed for different applications. Thus the transaction-management component supplies primitives for defining project-specific concurrency control policies, analogous to process modeling. The open architecture enables mediation between the transaction manager and pre-existing systems, mapping task units to transaction-like constructs.
- Geographical distribution. Industrial-scale software development increasingly takes place outside the boundaries of a local area net, often spread across regions and/or independent organizations. Collaborating subcontractors may guard their own proprietary processes and tools, while sharing data subject to security constraints, so a model for “cooperating software processes” is needed. The open architecture extends workflow management and execution technology to interoperability among autonomously defined processes, with an Internet-capable PCE infrastructure.
FY96 Accomplishments
Database management systems (DBMSs) are increasingly used for advanced application domains, such as software development environments, network management, workflow management systems, CAD/CAM, and managed healthcare, where the standard correctness model of serializability is often too restrictive. The Atlantis project at Columbia University introduced the notion of a Concurrency Control Language (CCL) that allows a database application designer to specify concurrency control policies to tailor the behavior of a transaction manager; a well-crafted set of policies defines an extended transaction model. The necessary semantic information required by the CCL run-time interpreter is extracted from a task manager, a (logical) module by definition included in all advanced applications, which stores task models that encode the semantic information about the transactions submitted to the DBMS. Columbia designed a rule-based CCL, called CORD, and implemented a run-time interpreter that can be hooked to a conventional transaction manager to implement the sophisticated concurrency control required by advanced database applications. They developed an architecture for systems based on CORD, integrated the CORD interpreter with their own Pern Cooperative Transaction Manager and with the Exodus Storage Manager from University of Wisconsin, and implemented the well-known Altruistic Locking and Epsilon Serializability extended transaction models. CORD/Pern is being used by DARPA-funded projects at Brown University and University of Colorado.
The Atlantis project at Columbia University developed a process server component, called Amber. New systems could be constructed around the component, or existing non-process environment architectures enhanced with value-added process services (as in Columbia’s integration with Field from Brown University). Synergistic integration with existing process engines enables a degree of heterogeneity (as in Columbia’s coupling with their mockup of the TeamWare system from University of California at Irvine), and previous-generation process enactment facilities in existing environments could be replaced (as Columbia did with their own Oz system and with the ProcessWEAVER product from Cap Gemini). Amber also supports translation of higher-level process modeling formalisms, e.g., Petri nets, grammars and graphs, into its rule-based “process assembly language” for enactment, and permits addition of formalism-specific support to the process engine via an extension and parameterization mechanism. Thus the same process engine can support many different process modeling paradigms. Columbia developed translators for ProcessWEAVER’s Cooperative Procedures (Petri nets), Bill Riddle’s Activity Structures (concurrent regular expressions), and StateMate’s statecharts (finite state automata). Integration of Amber into some foreign system, and extension/parameterization of its process syntax and semantics, are both achieved through callbacks to a mediator consisting of special-purpose “glue” code. The Amber version of Oz is now used for all software development by the Columbia Atlantis group.
Black Box enveloping technology expects the tool integrator to write a special-purpose script to handle the details of interfacing between each COTS tool and the environment framework. Generally, the complete set of arguments from the environment’s data repository is supplied to the tool at its invocation and any results are gathered only when the tool terminates, so the tool execution is encapsulated within an individual task. This does not work very well for: incremental tools that request parameters and/or return (partial) results in the middle of their execution, e.g., multi-buffer editors and interactive debuggers; interpretive tools that maintain a complex in-memory state reflecting progress through a series of operations, e.g., “Knowledge-Based Software Assistants”; and collaborative tools that support direct interaction among multiple users, including asynchronous discussion and synchronous conferencing. The Atlantis project at Columbia University introduced a Multi-Tool Protocol (MTP), which enables submission of multiple tasks, either serially or concurrently, to the same executing tool instance on behalf of the same or different users. Single-user tools can thereby be converted to a floor-passing form of groupware, and a user can even send a running task to another user for assistance; these facilities assume X11. MTP also addresses multiple platforms: transmitting tool invocations to machines other than where the user is logged in, e.g., when the tool runs only on a particular machine architecture or is licensed only for a specific host. Columbia implemented MTP as part of their Oz process-centered environment.
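A rough sketch of the idea follows (assumed interfaces, not the MTP specification itself): tasks are routed to an already-running tool instance, which is looked up in a registry rather than spawned once per task.

```java
// Hypothetical sketch: reuse one executing tool instance for many tasks,
// from the same or different users, instead of one process per task.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

interface ToolInstance {
    String submitTask(String user, String taskSpec);   // returns a task id
    void transferTask(String taskId, String toUser);   // hand a running task to another user for assistance
}

class MultiToolRegistry {
    private final Map<String, ToolInstance> running = new ConcurrentHashMap<>();

    // Reuse a running instance of the named tool if one exists; otherwise launch one.
    ToolInstance instanceFor(String toolName, Supplier<ToolInstance> launcher) {
        return running.computeIfAbsent(toolName, t -> launcher.get());
    }
}
```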
Componentization involves restructuring a stovepipe system into components that could potentially be reused in other systems, and/or re-engineering an old system to permit replacement of native code with new components. The Atlantis project at Columbia University developed two successive processes, OzMarvel (running on top of the Marvel process-centered environment) and EmeraldCity (on top of the Oz process-centered environment, the successor to Marvel). Each was intended to support both aspects of componentization, but EmeraldCity addressed one requirement not fully understood when OzMarvel was developed: process support for co-existing components designed as alternatives to each other in plug-n-play style. Columbia used OzMarvel to replace Oz’s native transaction management subsystem with a transaction manager component (Pern), and used EmeraldCity both to replace Oz’s native object management system with an object-oriented database management system component (called Darkover) and to replace Oz’s native process engine with a process server component (Amber).
FY97 Plan
The Atlantis project at Columbia University plans external release of a fully documented and robust version of the Oz process-centered environment framework ported to Solaris 2.5. They already have three “alpha” sites: University of Massachusetts, North Carolina State University, and University of West Virginia, but for an earlier SunOS 4.1.3 version. The official release will be downloadable from the World Wide Web, as their older Marvel 3.1.1 system is now.
Columbia University will complete their currently “proof-of-concept” World Wide Web browser user interface client to their Oz process-centered environment, so that it is fully functional for team software development on Solaris 2.5 and Windows NT. They are developing a general method for accessing legacy client/server applications from standard WWW browsers. An existing client for the system is modified to perform HTTP proxy server duties. Web browser users simply configure their browsers to use this proxy, and can then access the target system via specially encoded URLs that the proxy intercepts and sends to the existing server. The Web-based browser user interface to Oz is one example.
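The routing decision such a proxy makes can be sketched as follows; the URL encoding and class names are hypothetical, standing in for whatever scheme the modified client actually uses.

```java
// Hypothetical sketch of the proxy's routing rule: specially encoded URLs are
// intercepted and translated into requests on the legacy server's native
// protocol; all other URLs are forwarded to the external web site unchanged.
import java.net.URI;

class LegacyGatewayProxy {
    // Prefix assumed for illustration; the real encoding scheme may differ.
    private static final String LEGACY_PREFIX = "/legacy/";

    String handle(URI requested) {
        if (requested.getPath().startsWith(LEGACY_PREFIX)) {
            String command = requested.getPath().substring(LEGACY_PREFIX.length());
            return forwardToLegacyServer(command, requested.getQuery());
        }
        return forwardToWeb(requested);   // ordinary URL: pass through to the external site
    }

    private String forwardToLegacyServer(String command, String query) {
        // Translate into the existing server's native request format (stubbed here).
        return "legacy-response for " + command + (query == null ? "" : "?" + query);
    }

    private String forwardToWeb(URI uri) {
        return "pass-through to " + uri;  // stub: a real proxy would issue the HTTP request
    }
}
```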
Columbia University will develop mechanisms to structure information so that the view of the World Wide Web, both within and across Web pages, is dynamically customizable. They will investigate an architecture that integrates environment data repositories with WWW to organize such dynamic structures for use in software development environments. Different users, or the same user at different times, could have different views of the Web. Their architecture will provide high flexibility for a wide variety of applications, ranging from managed healthcare to software development environments. A preliminary version will be realized in a “proof-of-concept” system based on Columbia’s Darkover object-oriented database component.
Technology Transition
Oz 1.x has been licensed in alpha form to three institutions (University of Massachusetts, North Carolina State University, and University of West Virginia). The first general release is planned for Fall 1996. Oz is a multi-site process-centered environment where each “site” autonomously defines its own process, data schema and tool wrappers (to integrate COTS tools). An arbitrary number of users connected to the same site participate in the same process. Sites may optionally form alliances that enable multi-process interoperability: Site administrators negotiate Treaties — agreed-upon shared subprocesses, and then the local environments may coordinate Summits — Treaty-defined process steps that involve data and/or users from multiple sites. Multi-site environment instances may involve multiple teams and organizations dispersed across the Internet, e.g., Oz has been run between Columbia and the (now defunct) Bull lab in Billerica, MA. Sites may alternatively co-reside within the same local area network, e.g., a two-process Oz environment is used in-house every day, one process for the shared release area and another for private workspaces, and a three-process Oz environment was previously used when Columbia re-engineered their system to replace the native object management system with an object-oriented database component.
Oz is most commonly used for software development processes, but environment instances have also been developed for document authoring and medical care plan automation (the latter in collaboration with Columbia-Presbyterian Medical Center). The alpha release, Oz 1.1.1, runs on SunOS 4.1.3 and provides XView and tty user interfaces. Oz 1.3, the version planned for release, will run on Solaris 2.5 and include a platform-independent World Wide Web browser-based user interface. Oz has been used for its own support since April 1995.
Oz’s single-site, single-process (but multi-user) predecessor, Marvel 3.x, has been licensed to approximately 45 institutions (including 13 companies and the Naval Research Laboratory). It is now available for downloading over the World Wide Web (../download.html), so an exact count is no longer possible. Marvel 3.1.1 was the final released version, and has been available since May 1994. Marvel has previously been released for SunOS 4.1.x, Ultrix 4.2 and AIX 3.2, but only the SunOS binaries are currently available. XView and tty user interfaces are provided. Marvel was used for its own support, starting in January 1992, and later for initial Oz development.
Marvel is useful as either a self-contained multi-user process-centered environment (the user organization supplies its own process and tools or tailors the samples provided), or as a component for constructing a proprietary environment. The most serious commercial user is AT&T, which has employed successive versions of Marvel since 1992 as the workflow process engine component of their Provence system. Provence operates in a completely non-intrusive manner, hiding the workflow details from the user, who goes about his/her daily work in the normal manner. However, Provence monitors workflow steps as they occur and automatically performs various bookkeeping chores on behalf of the users to improve their productivity. AT&T is also employing Marvel to model and analyze the processes of several internal business units. For example, Marvel has been used to model the Initial Modification Request Initiation and Tracking Process for the 5ESS Switching Software, and to model the Signaling Fault Detection and Tracking Process for the Technology Control Center. The AT&T contact states that both models resulted in better descriptions and formalization of the processes, and in the ability to run simulations and operational scenarios for the purpose of analyzing and visualizing the processes. In one case a significant, money-losing flaw in a process was uncovered during modeling. Contact Dr. Naser Barghouti, AT&T Research, 3D-552, 600 Mountain Avenue, Murray Hill, NJ 07974, 908-582-4384, naser@research.att.com.
An unmoderated user newsgroup is available for Marvel and Oz users and potential users. To be added to this mailing list, send email to majordomo@cs.columbia.edu with the words “subscribe oz-future” in the body of the message. Messages can be posted by sending email to oz-future@cs.columbia.edu (ozzy-n-webbiet@cs.columbia.edu is an alias). Note the newsgroup is NOT moderated: everything posted will automatically be sent to every member of the mailing list (to which all current licensees were initially subscribed).
Pern 1.x has been licensed in alpha form to two institutions (Brown University and University of Colorado). Columbia plans the first general release for Fall 1996. Pern is a transaction manager component that supports both conventional and policy-based concurrency control, the latter through CORD below, and conventional recovery logging. It works with any kind of data repository that uniquely identifies its entities. Integration with legacy systems, including environment frameworks and database management systems, is implemented via a mediator-based architecture. Pern has been integrated into the Oz prototype above and experimentally interfaced to Cap Gemini’s ProcessWEAVER product and GIE Emeraude’s PCTE product. Pern includes CORD, a rule-based language (no relation to Oz or Marvel process rules) for defining concurrency control policies and extended transaction models (ETMs) that relax conventional serializability. Application-specific semantics are drawn from a task management layer (e.g., a process or workflow engine) via the mediators above and dynamically linked helper functions. Example ETM specifications have included altruistic locking, epsilon serializability, and sagas. CORD can be used independently of Pern, and has been experimentally integrated with U. Wisconsin’s Exodus database management system. Pern/CORD runs on SunOS 4.1.3 and Solaris 2.4+. Pern has been used daily as part of Oz since April 1995.
Potential licensees for Oz or Pern may request information from MarvelUS@cs.columbia.edu. Bugs should also be reported there. Potential Marvel licensees should consult the website mentioned above; Marvel is no longer supported, and users will be encouraged to upgrade to Oz.