
Computer Viruses: Myth or Reality?

Howard Israel
National Computer Security Conference Proceedings (10th): Computer Security - From Principles to Practices, 21-24 September 1987, pp.226-230
September 1987

Howard Israel
National Computer Security Center
9800 Savage Rd.
Fort George G. Meade, MD 20755-6000

Abstract

This paper will show that a computer virus [COHEN] may be no more a threat to computer systems than a Trojan Horse, and that any protection mechanism that works against a Trojan Horse will also work against a computer virus, specifically a mandatory policy (e.g., [BELL/LAP] [BIBA]). In addition, it will discuss two possible protection mechanisms that address the Trojan Horse threat.

Background

A computer virus is a program that propagates itself [COHEN]. Depending upon its design, a virus may propagate itself on a limited basis or more extensively through the file system. That is, it may selectively propagate itself so that only one copy exists at any one time in the system [THOMPSON]. It may spread slowly through the system, or it may propagate as fast and as often as possible.

A virus may act as a Trojan Horse [ANDERSON] (hereafter referred to as a "viritic Trojan Horse") by performing an overt action (the advertised purpose of the code, which the executor expects to occur) and a covert action (typically benefiting the author and harming the executor of the Trojan Horse, which the executor does not expect to occur), and then propagating itself to other areas of the file system, taking advantage of the executor's privileges and rights. Because a viritic Trojan Horse can "flow through" the system (via the viritic feature), it may increase both the likelihood of execution and the number of executions.

D. J. Edwards identified the Trojan Horse attack in [ANDERSON]. In [KARGER], the concept of a Trojan Horse propagating itself was discussed, although no distinction was made between a Trojan Horse that was viritic and one that was not. The ARPANET collapse on October 27, 1980 was attributed to the accidental propagation of a virus [NEUMANN]. There are even references to viruses in modern science-fiction novels [BRUNNER].

Part I: Comments on Recent Research

1. Measuring Infection Times

To show that a viritic Trojan Horse is a significant threat beyond a non-viritic Trojan Horse, it would be necessary to compare the infection time [COHEN] of a viritic Trojan Horse against that of a comparable non-viritic Trojan Horse. The use of a control group should adequately show whether the viritic attribute has an additional significant effect on the Trojan Horse threat.

This author welcomes any research in this area, for, if done properly, it will show a viritic Trojan Horse to be either a more serious threat than a non-viritic Trojan Horse or of no greater consequence. However, highly variable factors will change over the life of such an experiment, including "the enticement" to execute the Trojan Horse (the advertised overt capability of the program as well as the methods used to "sell" it to the target user community), the knowledge of the user community, and other assorted variables such as user activity level, time of day, etc. Results presented in this area should be scrutinized fully because of these "field" variables. Therefore, experiments must be designed and executed very carefully before any results should be considered credible.

2. Virus Effects on Systems with Fundamental Flaws in Security Policies

[COHEN] discusses virus experiments that show "fundamental flaws in security policies". Any fundamental flaw found in a security policy need not use a virus to display the weakness; a non-viritic Trojan Horse should succeed in demonstrating any such weakness sufficiently. There is no perceived advantage in using a viritic Trojan Horse (as opposed to a non-viritic Trojan Horse) to demonstrate a flaw in a security policy.

Although it may be easier, in some cases, to achieve a particular objective by using a viritic Trojan Horse, it has not been shown, nor does this author believe it can be shown, that there is an objective a viritic Trojan Horse can achieve that a non-viritic Trojan Horse cannot achieve on currently used computer systems.

It is also interesting to note that the experiments performed in [COHEN] were executed on systems that either did not have an enforced mandatory "security policy" at all (i.e., UNIX, VM/370, VMS, TOPS-20) or had only a partial implementation of a mandatory security policy (i.e., OS/1100 on the Univac 1108) [LEE], thereby proving the obvious. The following discussion describes the effects a Trojan Horse can have on a system that enforces a mandatory policy.

The model described in [BELL/LAP] protects systems against unauthorized disclosure as defined in a specific policy. In a properly implemented [BELL/LAP] system, a Trojan Horse would have to take advantage of a covert channel to disclose information. The same holds true for a viritic Trojan Horse. Earlier work [COHEN] implied that the Univac 1108 fully implements the [BELL/LAP] model. This is not the case: OS/1100, as delivered by the vendor, has the concept of "security levels" and enforces the simple-security condition, but it does not enforce the *-property [LEE].

Note: a Trojan Horse whose purpose is to violate the integrity of a system [BIBA] could easily succeed in a system that only enforces the [BELL/LAP] model. Thus, it is always true that a system can only protect what it is designed to protect, and not necessarily more.

A system that enforces an integrity model [BIBA] would protect against a Trojan Horse (viritic or not) that attempts to violate the integrity policy. In [COHEN], the erroneous conclusion was reached that a system with both an integrity policy [BIBA] and a security policy [BELL/LAP] must provide isolation. This would be true only if a single label were used for both the security and integrity policy enforcement (see Table D below) [SCHELL]. One must consider the case described in Table C: both policies may exist concurrently in a system without forming an isolation, or complete partition, between security levels [SCHELL]. The following simplified example illustrates this:

Assume:

"TS" and "U" are both clearances (on users) and labels (on objects) that enforce the security policy (i.e., read policy).

"H" and "L" are both clearances (on users) and labels (on objects) that enforce the integrity policy (i.e., write policy).

A "TS" labeled object is more sensitive to disclosure than a "U" labeled object. A "TS" cleared user (subject) is not permitted to write "TS" objects to a "U" cleared user (subject). A "U" cleared user (subject) is not permitted to read a "TS" object.

An "H" labeled object is more sensitive to modification and creation than an "L" labeled object. An "L" cleared user (subject) is not permitted to write an "H" object. An "H" cleared user (subject) is not permitted to read an "L" object.

Access modes:

  R     Read
  W     Write
  Null  None

Permissible actions:

                   Object
                TS        U
  Subject  TS   RW        R
           U    W         RW

Table A: Security policy (simplified).

As shown in Table A, the basic concern is to prevent an untrusted subject from reading sensitive objects. The flow of information tends to be from least sensitive to most sensitive ("U" to "TS").

Permissible actions:

                   Object
                H         L
  Subject  H    RW        W
           L    R         RW

Table B: Integrity policy (simplified).

In Table B, the basic concern is to prevent an untrusted subject from writing (or creating) a high-integrity object. The flow of information is from high integrity to low integrity ("H" to "L").

Permissible actions:

                        Object
                 TS/H   TS/L   U/H    U/L
  Subject  TS/H  RW     W      R      Null
           TS/L  R      RW     R      R
           U/H   W      W      RW     W
           U/L   Null   W      R      RW

Table C: Intersection of both a security and integrity policy.

Table C shows the relationship between security and integrity. It represents the intersection of the security and integrity policies defined above. A "U/L" subject can neither Read nor Write a "TS/H" object. A "TS/H" subject can neither Read nor Write a "U/L" object. These are desirable features, for they will stop the flow of a viritic Trojan Horse from one partition to the next, while still permitting the controlled sharing of information.
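Table C can be derived mechanically. The following sketch (this author's own illustration; the helper names and label encoding are not part of any cited model) computes each cell as the set of access modes permitted by both the simplified security policy (Table A) and the simplified integrity policy (Table B):

```python
# Security (read) policy of Table A: "TS" dominates "U".
SEC_ORDER = {"TS": 1, "U": 0}
# Integrity (write) policy of Table B: "H" dominates "L".
INT_ORDER = {"H": 1, "L": 0}

def security_modes(subj, obj):
    """Simplified [BELL/LAP]: read at or below one's level, write at or above."""
    modes = set()
    if SEC_ORDER[subj] >= SEC_ORDER[obj]:   # simple-security condition
        modes.add("R")
    if SEC_ORDER[subj] <= SEC_ORDER[obj]:   # *-property (no write down)
        modes.add("W")
    return modes

def integrity_modes(subj, obj):
    """Simplified [BIBA]: read at or above one's level, write at or below."""
    modes = set()
    if INT_ORDER[subj] <= INT_ORDER[obj]:   # no read down
        modes.add("R")
    if INT_ORDER[subj] >= INT_ORDER[obj]:   # no write up
        modes.add("W")
    return modes

def combined_modes(subj, obj):
    """A mode is allowed only if BOTH policies allow it (Table C)."""
    s_sec, s_int = subj.split("/")
    o_sec, o_int = obj.split("/")
    return security_modes(s_sec, o_sec) & integrity_modes(s_int, o_int)

labels = ["TS/H", "TS/L", "U/H", "U/L"]
for subj in labels:
    row = ["".join(sorted(combined_modes(subj, obj))) or "Null" for obj in labels]
    print(subj, row)
```

Running the loop reproduces the four rows of Table C, including the two "Null" cells that cut off any flow between "TS/H" and "U/L".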

Permissible actions:

                   Object
                TS        U
  Subject  TS   RW        Null
           U    Null      RW

Table D: Subject/object relationship when the same label is used for both "security" decisions and "integrity" decisions.

Table D shows the permissible actions that can occur on a system where the same label is used for both security and integrity decisions. The result is isolation between the two classes of users.

Summary

An enforced disclosure and integrity policy can provide an effective means of stopping several classes of Trojan Horse (both viritic and non-viritic) attacks, provided the mechanisms are defined in consideration of each other. These policies will not have an effect on attacks that invoke Denial-of-Service problems on a system, as the disclosure and integrity policies mentioned do not address Denial-of-Service issues.

While the above simplified example demonstrates the correctness of the approach, allowing one category to be added to each of the security and integrity labels increases the complexity of the access matrix to 256 different access cases (16x16). Although this may appear overwhelming, the defined policies can still be easily enforced, no matter how many levels and how large the category sets are defined for both the security and integrity policies.
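The arithmetic behind that figure can be checked directly (the category name below is hypothetical; any single category produces the same counts):

```python
from itertools import product

# Adding one category to each label doubles each label space:
# 2 levels x 2 category subsets = 4 labels per policy, 4 x 4 = 16
# combined labels, hence a 16 x 16 = 256-cell access matrix.
levels = ["TS", "U"]
category_subsets = [frozenset(), frozenset({"crypto"})]  # hypothetical category

security_labels = list(product(levels, category_subsets))        # 4
integrity_labels = list(product(["H", "L"], category_subsets))   # 4
combined_labels = list(product(security_labels, integrity_labels))

print(len(combined_labels), len(combined_labels) ** 2)  # → 16 256
```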

There are systems available today that enforce a mandatory policy [MULTICS] [SCOMP]. These systems will be able to provide protection against Trojan Horse (viritic or not) attacks that attempt to violate the enforced mandatory policy.

Part II: Possible Methods to Defeat Viritic Trojan Horses

1. Comparison Utility

Without considering the objective of the Trojan Horse, it appears much easier to detect the presence of a viritic Trojan Horse that has successfully propagated itself (i.e., more than one copy of the virus exists in the system) than that of a non-viritic Trojan Horse. This proposed detection method would use a comparison utility to show the use of similar code in different files. Any similar code discovered may or may not exist for legitimate reasons.

Consider a file system that has "n" files. It would require:

n(n-1)/2

comparisons on the files to completely detect a successfully propagated Trojan Horse (i.e., a viritic Trojan Horse). If, during the comparison process, code is found common to two programs, those programs would then be considered suspect. It would be necessary to review them by hand to confirm or deny the presence of the viritic Trojan Horse. The code review would point out whether the "common code" has a valid purpose. What is being detected are similarities in code that, in principle, should not exist. This method is independent of the function of the (viritic) Trojan Horse; that is, the purpose of the viritic Trojan Horse does not matter for detecting its existence.

This method could not be used to detect a non-viritic Trojan Horse for obvious reasons (i.e., only one copy of the Trojan Horse may exist, rather than several, as is likely, though not necessary [THOMPSON], with a viritic Trojan Horse).

Given the above possible solution to detecting a viritic Trojan Horse, several details remain. Detection depends upon how good the comparison utility is. It also depends upon how well the viritic Trojan Horse succeeds in implanting its "child" into innocuous programs.
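The pairwise comparison can be sketched as follows. Everything here is an illustrative assumption, not a production utility: the file contents are toy strings, and the `share_common_chunk` test (looking for any shared 16-byte run) stands in for whatever similarity measure a real comparison utility would use.

```python
import itertools

def share_common_chunk(a, b, chunk=16):
    """Crude similarity test: do the two byte strings share any chunk-byte run?"""
    chunks_a = {a[i:i + chunk] for i in range(len(a) - chunk + 1)}
    return any(b[i:i + chunk] in chunks_a for i in range(len(b) - chunk + 1))

def suspect_pairs(files):
    """Compare every pair of files: n(n-1)/2 comparisons for n files."""
    return [(x, y) for x, y in itertools.combinations(sorted(files), 2)
            if share_common_chunk(files[x], files[y])]

# A propagated virus leaves the same implant in otherwise unrelated programs.
payload = b"if trigger(): do_covert_action()"   # hypothetical implanted code
files = {
    "editor":   b"open, edit, save loop..." + payload,
    "compiler": b"parse, optimize, emit..." + payload,
    "mailer":   b"read, compose, send...",
}
print(suspect_pairs(files))  # → [('compiler', 'editor')]
```

The flagged pair would then go to the "review by hand" step described above; the mailer, which carries no implant, is never suspected.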

For a viritic Trojan Horse to implant itself successfully, it would have to be implanted in such a way as to guarantee:

  1. that the target program would remain operative, and
  2. that the virus would be put into a location such that the (entire) viritic aspect would be guaranteed to be executed.

If either of the two preceding conditions were not met, the success of the viritic Trojan Horse would be jeopardized.

One way to defeat the above detection would be for the viritic Trojan Horse to propagate itself such that the child's "likeness" was not the same as the parent's "likeness" (i.e., the code appeared different enough that the comparison utility could not detect the similarity). This is perceived as a difficult, although not impossible, problem.

2. Spawning an Untrusted Process

By enforcing the least-privilege concept on a process-by-process basis, it is possible to provide a safe environment in which to execute untrusted code (which may contain a Trojan Horse) [DOWNS].

When a process wants to execute "untrusted" code (which the executor suspects contains either a viritic or non-viritic Trojan Horse), the process could spawn a child process, which would include any necessary data. As long as the child process's access rights are limited with respect to the parent process's access rights, the parent process (and all associated data files) would be safe. Of course, anything in the system that the child process can access is a potential victim of the Trojan Horse, including other information located in the child process (e.g., data deemed necessary to execute the untrusted code) and the results of the executed program.

If one considers the child process to be temporary (i.e., for the life of execution of the untrusted program) and the user can terminate the program at will, then the user will be able to protect the information managed by the parent process, which is the goal of this exercise.
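The spawn-and-discard idea can be sketched as follows. This is a minimal illustration under stated assumptions: a POSIX-style system, a hypothetical untrusted script, and a policy of giving the child only copies of the data it needs inside a throwaway directory. A real system would also drop privileges for the child, as in the administrator analogy below.

```python
import os
import shutil
import subprocess
import sys
import tempfile

def run_untrusted(script_source, needed_files):
    """Run untrusted code in a throwaway directory holding only COPIES
    of the data it needs; the parent's originals are never exposed."""
    sandbox = tempfile.mkdtemp(prefix="sandbox-")
    try:
        for path in needed_files:               # child sees copies only
            shutil.copy(path, sandbox)
        script = os.path.join(sandbox, "untrusted.py")
        with open(script, "w") as f:
            f.write(script_source)
        result = subprocess.run([sys.executable, "untrusted.py"],
                                cwd=sandbox, capture_output=True,
                                text=True, timeout=10)
        return result.stdout
    finally:
        shutil.rmtree(sandbox)                  # discard the child's world

# The "game" tampers with every file it can see -- but only the copies.
game = "open('data.txt', 'w').write('infected')\nprint('done')"
with tempfile.TemporaryDirectory() as home:
    original = os.path.join(home, "data.txt")
    with open(original, "w") as f:
        f.write("precious")
    out = run_untrusted(game, [original])
    print(out.strip(), open(original).read())   # → done precious
```

Any data handed to the child (the copy of `data.txt`) is sacrificed, mirroring the caveat above that information inside the child process remains a potential victim; the parent's original survives untouched.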

This can be considered analogous to what a system administrator can do today. If an administrator (with the appropriate system privileges) wanted to execute untrusted code (e.g., a game program), he could perform the following:

  1. Set up a new account that has no access to the administrator's privileged account or to anything but publicly readable files.
  2. Log in to the new unprivileged account.
  3. Execute the game program.
  4. Delete the unprivileged account.

Of course, if the unprivileged account had write access to any file in the system, the untrusted code (e.g., game program) could propagate a viritic Trojan Horse into the unprotected file, and thus be subject to further execution.

This is in addition to the obvious risk of unintentional disclosure, modification, or deletion of any data given to the game program to accomplish its task. (This risk would be nonexistent if the untrusted code did not need any user supplied data.)

The remaining problem with this scenario, then, is that the untrusted code could invoke a Denial-of-Service attack on both the user's process and the system. Since a well-accepted model is lacking in this area, no solution is proposed.

Conclusion

A viritic Trojan Horse (i.e., computer virus) presents no new threat to computer systems. If the Trojan Horse problem were solved (for any class of Trojan Horse problems), the viritic Trojan Horse problem would also be solved (for that same class). Any solution to the Trojan Horse problem would also be a solution to the viritic Trojan Horse problem.

A security policy and an integrity policy (used in conjunction, in an intelligent manner) provide a reasonable protection scheme against Trojan Horse (either viritic or not) attacks. A Trojan Horse (viritic or not) may still invoke a Denial-of-Service problem, unless a model addressing this issue can be stated and enforced in a system.

While a viritic Trojan Horse is interesting, in that it presents many novel attacks, it is no more dangerous than a non-viritic Trojan Horse attack. The viritic aspect of a Trojan Horse appears to be more of a red herring, in the sense that it has taken attention from the basic problem.

Two partial solutions have been discussed. Each must be explored and experimented with in more detail. Better solutions for more classes of Trojan Horse attacks need to be advanced.

Acknowledgments

I would like to thank A. Arsenault, S. LaFountain, R. Morris, and G. Wagner for their insightful comments on early drafts of this paper. I would also like to thank J. Beckman and O. Saydjari for participating in interesting conversations on this topic. Thanks also go to S. O'Brien, C. Schiffman and R. Winder for their careful review of this paper and A. Arsenault, D. Gary and M. Tinto for their continuous encouragement.

References
