Asterisx Limpio Results - Fortify Security Report
Executive Summary
Issues Overview
On Jul 5, 2018, a source code review was performed over the analisis_id code base. 1,169 files and 86,844 LOC (Executable) were
scanned and reviewed for defects that could lead to potential security vulnerabilities. A total of 2,802 findings were
uncovered during the analysis.
Project Summary
Code Base Summary
Code location: /usr/src/asterisk-15.4.1/pjproject
Number of Files: 1169
Lines of Code: 86844
Build Label: <No Build Label>
Scan Information
Scan time: 23:07
SCA Engine version: 6.21.0005
Machine Name: jhairofc-pc
Username running scan: root
Results Certification
Results Certification Valid
Details:
Results Signature:
Rules Signature:
Attack Surface
Attack Surface:
Private Information:
null.null.null
System Information:
null.null.gethostname
null.null.strerror
null.null.strerror_r
null.null.uname
Folder Filters:
If [fortify priority order] contains critical Then set folder to Critical
If [fortify priority order] contains high Then set folder to High
If [fortify priority order] contains medium Then set folder to Medium
Results Outline
Overall number of results
The scan found 2,802 issues.
Number of Issues
<Unaudited>
Not an Issue
Reliability Issue
Bad Practice
Suspicious
Exploitable
Abstract:
The function server_thread() in http_client.c might be able to write outside the bounds of allocated memory on line 140, which
could corrupt data, cause the program to crash, or lead to the execution of malicious code.
Explanation:
Buffer overflow is probably the best known form of software security vulnerability. Most software developers know what a
buffer overflow vulnerability is, but buffer overflow attacks against both legacy and newly-developed applications are still quite
common. Part of the problem is due to the wide variety of ways buffer overflows can occur, and part is due to the error-prone
techniques often used to prevent them.
In a classic buffer overflow exploit, the attacker sends data to a program, which it stores in an undersized stack buffer. The result
is that information on the call stack is overwritten, including the function's return pointer. The data sets the value of the return
pointer so that when the function returns, it transfers control to malicious code contained in the attacker's data.
Although this type of stack buffer overflow is still common on some platforms and in some development communities, there are
a variety of other types of buffer overflow, including heap buffer overflows and off-by-one errors among others. There are a
number of excellent books that provide detailed information on how buffer overflow attacks work, including Building Secure
Software [1], Writing Secure Code [2], and The Shellcoder's Handbook [3].
At the code level, buffer overflow vulnerabilities usually involve the violation of a programmer's assumptions. Many memory
manipulation functions in C and C++ do not perform bounds checking and can easily overwrite the allocated bounds of the
buffers they operate upon. Even bounded functions, such as strncpy(), can cause vulnerabilities when used incorrectly. The
combination of memory manipulation and mistaken assumptions about the size or makeup of a piece of data is the root cause of
most buffer overflows.
Buffer overflow vulnerabilities typically occur in code that:
- Relies on external data to control its behavior.
- Depends upon properties of the data that are enforced outside of the immediate scope of the code.
- Is so complex that a programmer cannot accurately predict its behavior.
Example 2.a: The following sample code demonstrates a simple buffer overflow that is often caused by the first scenario in
which the code relies on external data to control its behavior. The code uses the gets() function to read an arbitrary amount of
data into a stack buffer. Because there is no way to limit the amount of data read by this function, the safety of the code depends
on the user to always enter fewer than BUFSIZE characters.
...
char buf[BUFSIZE];
gets(buf);
...
Example 2.b: This example shows how easy it is to mimic the unsafe behavior of the gets() function in C++ by using the >>
operator to read input into a char[] string.
...
char buf[BUFSIZE];
cin >> (buf);
...
Example 3: The code in this example also relies on user input to control its behavior, but it adds a level of indirection with the
use of the bounded memory copy function memcpy(). This function accepts a destination buffer, a source buffer, and the number
of bytes to copy. The input buffer is filled by a bounded call to read(), but the user specifies the number of bytes that memcpy()
copies.
...
char buf[64], in[MAX_SIZE];
int bytes;
printf("Enter buffer contents:\n");
read(0, in, MAX_SIZE-1);
printf("Bytes to copy:\n");
scanf("%d", &bytes);
memcpy(buf, in, bytes);
...
Note: This type of buffer overflow vulnerability (where a program reads data and then trusts a value from the data in subsequent
memory operations on the remaining data) has turned up with some frequency in image, audio, and other file processing libraries.
Example 4: The following code demonstrates the third scenario in which the code is so complex its behavior cannot be easily
predicted. This code is from the popular libPNG image decoder, which is used by a wide array of applications, including Mozilla
and some versions of Internet Explorer.
The code appears to safely perform bounds checking because it checks the size of the variable length, which it later uses to
control the amount of data copied by png_crc_read(). However, immediately before it tests length, the code performs a check on
png_ptr->mode, and if this check fails a warning is issued and processing continues. Since length is tested in an else if block,
length would not be tested if the first check fails, and is used blindly in the call to png_crc_read(), potentially allowing a stack
buffer overflow.
Although the code in this example is not the most complex we have seen, it demonstrates why complexity should be minimized
in code that performs memory operations.
Example 5: This example also demonstrates the third scenario in which the program's complexity exposes it to buffer overflows.
In this case, the exposure is due to the ambiguous interface of one of the functions rather than the structure of the code (as was
the case in the previous example).
The getUserInfo() function takes a username specified as a multibyte string and a pointer to a structure for user information, and
populates the structure with information about the user. Since Windows authentication uses Unicode for usernames, the
username argument is first converted from a multibyte string to a Unicode string. This function then incorrectly passes the size of
unicodeUser in bytes rather than characters. The call to MultiByteToWideChar() may therefore write up to
(UNLEN+1)*sizeof(WCHAR) wide characters, or
(UNLEN+1)*sizeof(WCHAR)*sizeof(WCHAR) bytes, to the unicodeUser array, which has only (UNLEN+1)*sizeof(WCHAR)
bytes allocated. If the username string contains more than UNLEN characters, the call to MultiByteToWideChar() will overflow
the buffer unicodeUser.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The format string argument to snprintf() at os_info.c line 331 does not properly limit the amount of data the function can write,
which allows the program to write outside the bounds of allocated memory. This behavior could corrupt data, crash the program,
or lead to the execution of malicious code.
Explanation:
Buffer overflow is probably the best known form of software security vulnerability. Most software developers know what a
buffer overflow vulnerability is, but buffer overflow attacks against both legacy and newly-developed applications are still quite
common. Part of the problem is due to the wide variety of ways buffer overflows can occur, and part is due to the error-prone
techniques often used to prevent them.
In a classic buffer overflow exploit, the attacker sends data to a program, which it stores in an undersized stack buffer. The result
is that information on the call stack is overwritten, including the function's return pointer. The data sets the value of the return
pointer so that when the function returns, it transfers control to malicious code contained in the attacker's data.
Although this type of stack buffer overflow is still common on some platforms and in some development communities, there are
a variety of other types of buffer overflow, including heap buffer overflows and off-by-one errors among others. There are a
number of excellent books that provide detailed information on how buffer overflow attacks work, including Building Secure
Software [1], Writing Secure Code [2], and The Shellcoder's Handbook [3].
At the code level, buffer overflow vulnerabilities usually involve the violation of a programmer's assumptions. Many memory
manipulation functions in C and C++ do not perform bounds checking and can easily exceed the allocated bounds of the buffers
they operate upon. Even bounded functions, such as strncpy(), can cause vulnerabilities when used incorrectly. The combination
of memory manipulation and mistaken assumptions about the size or makeup of a piece of data is the root cause of most buffer
overflows.
In this case, an improperly constructed format string causes the program to write beyond the bounds of allocated memory.
Example: The following code overflows c because the %d conversion stores an int, which requires more space than the single byte allocated for c.
void formatString(double d) {
    char c;
    scanf("%d", &c);
}
Recommendations:
Although the careful use of bounded functions can greatly reduce the risk of buffer overflow, this migration cannot be done
blindly and does not go far enough on its own to ensure security. Whenever you manipulate memory, especially strings,
remember that buffer overflow vulnerabilities typically occur in code that:
- Relies on external data to control its behavior.
- Depends upon properties of the data that are enforced outside of the immediate scope of the code.
- Is so complex that a programmer cannot accurately predict its behavior.
Additionally, consider the following principles:
- Never trust an external source to provide correct control information to a memory operation.
- Never trust that properties about the data your program is manipulating will be maintained throughout the program. Sanity
check data before you operate on it.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function http_on_data_read() in http_client.c is declared to return an unsigned value, but on line 421 it returns a signed
value.
Explanation:
It is dangerous to rely on implicit casts between signed and unsigned numbers because the result can take on an unexpected value
and violate weak assumptions made elsewhere in the program.
Example: In this example, depending on the return value of accessmainframe(), the variable amount can hold a negative value
when it is returned. Because the function is declared to return an unsigned value, amount will be implicitly cast to an unsigned
number.
If the return value of accessmainframe() is -1, then the return value of readdata() will be 4,294,967,295 on a system that uses 32-
bit integers.
Conversion between signed and unsigned values can lead to a variety of errors, but from a security standpoint is most commonly
associated with integer overflow and buffer overflow vulnerabilities.
Recommendations:
Although unexpected conversion between signed and unsigned quantities typically creates general quality problems, depending
on the assumptions that a conversion violates, it can lead to serious security risks. Pay attention to compiler warnings related to
signed/unsigned conversions. Some programmers may believe that these warnings are innocuous, but in some cases they point
out potential integer overflow problems.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
Attackers are able to control the file system path argument to fopen() at file_io_ansi.c line 63, which allows them to access or
modify otherwise protected files.
Explanation:
Path manipulation errors occur when the following two conditions are met:
1. An attacker is able to specify a path used in an operation on the file system.
2. By specifying the resource, the attacker gains a capability that would not otherwise be permitted.
For example, the program may give the attacker the ability to overwrite the specified file or run with a configuration controlled
by the attacker.
Example 1: The following code uses input from a CGI request to create a file name. The programmer has not considered the
possibility that an attacker could provide a file name such as "../../apache/conf/httpd.conf", which will cause the application to
delete the specified configuration file.
Example 2: The following code uses input from the command line to determine which file to open and echo back to the user. If
the program runs with adequate privileges and malicious users can create soft links to the file, they can use the program to read
the first part of any file on the system.
ifstream ifs(argv[1]);
string s;
ifs >> s;
cout << s;
Recommendations:
The best way to prevent path manipulation is with a level of indirection: create a list of legitimate resource names that a user is
allowed to specify, and only allow the user to select from the list. With this approach the input provided by the user is never used
directly to specify the resource name.
In some situations this approach is impractical because the set of legitimate resource names is too large or too hard to keep track
of. Programmers often resort to blacklisting in these situations. Blacklisting selectively rejects or escapes potentially dangerous
characters before using the input. However, any such list of unsafe characters is likely to be incomplete and will almost certainly
become out of date. A better approach is to create a whitelist of characters that are allowed to appear in the resource name and
accept input composed exclusively of characters in the approved set.
Tips:
1. If the program is performing custom input validation you are satisfied with, use the Fortify Custom Rules Editor to create a
cleanse rule for the validation routine.
2. Implementation of an effective blacklist is notoriously difficult. One should be skeptical if validation logic requires
blacklisting. Consider different types of input encoding and different sets of meta-characters that might have special meaning
when interpreted by different operating systems, databases, or other resources. Determine whether or not the blacklist can be
updated easily, correctly, and completely if these requirements ever change.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function server_thread() in http_client.c relies on proper string termination when it calls strlen() on line 122, but the source
buffer might not contain a null terminator. A buffer overflow is possible.
Explanation:
String termination errors occur when:
1. Data enters a program via a function that does not null terminate its output.
2. The data is passed to a function that requires its input to be null terminated.
Example 1: The following code reads from cfgfile and copies the input into inputbuf using strcpy(). The code mistakenly
assumes that inputbuf will always contain a null terminator.
The code in Example 1 will behave correctly if the data read from cfgfile is null terminated on disk as expected. But if an
attacker is able to modify this input so that it does not contain the expected null character, the call to strcpy() will continue
copying from memory until it encounters an arbitrary null character. This will likely overflow the destination buffer and, if the
attacker may control the contents of memory immediately following inputbuf, can leave the application susceptible to a buffer
overflow attack.
Example 2: In the following code, readlink() expands the name of a symbolic link stored in the buffer path so that the buffer buf
contains the absolute path of the file referenced by the symbolic link. The length of the resulting value is then calculated using
strlen().
...
char buf[MAXPATH];
...
readlink(path, buf, MAXPATH);
int length = strlen(buf);
...
The code in Example 2 will not behave correctly because the value read into buf by readlink() will not be null terminated. In
testing, vulnerabilities like this one might not be caught because the unused contents of buf and the memory immediately
following it may be null, thereby causing strlen() to appear as if it is behaving correctly. However, in the wild strlen() will
continue traversing memory until it encounters an arbitrary null character on the stack, which results in a value of length that is
much larger than the size of buf and may cause a buffer overflow in subsequent uses of this value.
Traditionally, strings are represented as a region of memory containing data terminated with a null character. Older string-
handling methods frequently rely on this null character to determine the length of the string. If a buffer that does not contain a
null terminator is passed to one of these functions, the function will read past the end of the buffer.
...
char buf[MAXPATH];
int size = readlink(path, buf, MAXPATH);
if (size != -1) {
    buf[size] = '\0';
    strncpy(filename, buf, MAXPATH);
    length = strlen(filename);
}
...
By calling strlen(), the programmer relies on a string terminator. The programmer has attempted to explicitly null terminate the
buffer in order to guarantee that this dependency is always satisfied. The problem with this approach is that it is error-prone. In
this example, if readlink() returns MAXPATH, then buf[size] will refer to a location outside of the buffer; strncpy() will fail to
null terminate filename; and strlen() will return an incorrect (and potentially huge) value.
On Windows, less secure functions like strcpy() can be replaced with their more secure versions, such as strcpy_s(). However,
this still needs to be done with caution. Because parameter validation provided by the _s family of functions varies, relying on it
can lead to unexpected behavior. Furthermore, incorrectly specifying the size of the destination buffer can still result in buffer
overflows and null termination errors.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function https_client_test() in ssl_sock.c references a freed memory location on line 454.
Explanation:
Use after free errors occur when a program continues to use a pointer after it has been freed. Like double free errors and memory
leaks, use after free errors have two common and sometimes overlapping causes:
- Error conditions and other exceptional circumstances.
- Confusion over which part of the program is responsible for freeing the memory
Use after free errors sometimes have no effect and other times cause a program to crash. While it is technically feasible for the
freed memory to be re-allocated and for an attacker to use this reallocation to launch a buffer overflow attack, we are unaware of
any exploits based on this type of attack.
Example: The following code illustrates a use after free error:
A common remediation is to set the pointer to NULL as soon as the memory is freed. While this technique prevents the freed memory from being used again, if there is still confusion about when the memory is
supposed to be freed, assigning the NULL to the pointer can result in a null pointer dereference. In most cases this is probably an
improvement, because the error is more likely to be caught during testing and is less likely to lead to an exploitable vulnerability.
It transforms an error with unpredictable behavior into an error that is easier to debug.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function on_read_complete() in resolver.c can crash the program by dereferencing a null pointer on line 1724.
Explanation:
Null pointer exceptions usually occur when one or more of the programmer's assumptions is violated. There are at least three
flavors of this problem: check-after-dereference, dereference-after-check, and dereference-after-store. A check-after-dereference
error occurs when a program dereferences a pointer that can be null before checking if the pointer is null. Dereference-after-
check errors occur when a program makes an explicit check for null, but proceeds to dereference the pointer when it is known to
be null. Errors of this type are often the result of a typo or programmer oversight. A dereference-after-store error occurs when a
program explicitly sets a pointer to null and dereferences it later. This error is often the result of a programmer initializing a
variable to null when it is declared.
Most null pointer issues result in general software reliability problems, but if an attacker can intentionally trigger a null pointer
dereference, the attacker may be able to use the resulting exception to bypass security logic in order to mount a denial of service
attack, or to cause the application to reveal debugging information that will be valuable in planning subsequent attacks.
Example 1: In the following code, the programmer assumes that the variable ptr is not NULL. That assumption is made explicit
when the programmer dereferences the pointer. This assumption is later contradicted when the programmer checks ptr against
NULL. If ptr can be NULL when it is checked in the if statement then it can also be NULL when it dereferenced and may cause
a segmentation fault.
ptr->field = val;
...
if (ptr != NULL) {
    ...
}
Example 2: In the following code, the programmer confirms that the variable ptr is NULL and subsequently dereferences it
erroneously. If ptr is NULL when it is checked in the if statement, then a null dereference will occur, thereby causing a
segmentation fault.
if (ptr == NULL) {
    ptr->field = val;
    ...
}
Example 3: In the following code, the programmer forgets that the string '\0' is actually 0 or NULL, thereby dereferencing a null
pointer and causing a segmentation fault.
if (ptr == '\0') {
    *ptr = val;
    ...
}
Example 4: In the following code, the programmer explicitly sets the variable ptr to NULL. Later, the programmer dereferences
ptr before checking the object for a null value.
ptr = NULL;
...
*ptr = val;
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
Empty passwords may compromise system security in a way that cannot be easily remedied.
Explanation:
It is never a good idea to assign an empty string to a password variable. If the empty password is used to successfully
authenticate against another system, then the corresponding account's security is likely compromised because it accepts an empty
password. If the empty password is merely a placeholder until a legitimate value can be assigned to the variable, then it can
confuse anyone unfamiliar with the code and potentially cause problems on unexpected control flow paths.
Example 1: The code below attempts to connect to a database with an empty password.
...
rc = SQLConnect(*hdbc, server, SQL_NTS, "scott", SQL_NTS, "", SQL_NTS);
...
If the code in Example 1 succeeds, it indicates that the database user account "scott" is configured with an empty password,
which can be easily guessed by an attacker. Even worse, once the program has shipped, updating the account to use a non-empty
password will require a code change.
Example 2: The code below initializes a password variable to an empty string, attempts to read a stored value for the password,
and compares it against a user-supplied value.
...
char *stored_password = "";
readPassword(stored_password);
if (safe_strcmp(stored_password, user_password)) {
    // Access protected resources
    ...
}
...
If readPassword() fails to retrieve the stored password due to a database error or another problem, then an attacker could trivially
bypass the password check by providing an empty string for user_password.
Recommendations:
Always read stored password values from encrypted, external resources and assign password variables meaningful values.
Ensure that sensitive resources are never protected with empty or null passwords.
Starting with Microsoft(R) Windows(R) 2000, Microsoft(R) provides Windows Data Protection Application Programming
Interface (DPAPI), which is an OS-level service that protects sensitive application data, such as passwords and private keys [1].
Tips:
1. When identifying null, empty, or hardcoded passwords, default rules only consider fields and variables that contain the word
password. However, the Fortify Custom Rules Editor provides the Password Management wizard that makes it easy to create
rules for detecting password management issues on custom-named fields and variables.
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function console_write_log() in cli_console.c mishandles confidential information on line 65. The program could
compromise user privacy.
Explanation:
Privacy violations occur when:
1. Private user information enters the program.
2. The data is written to an external location, such as the console, file system, or network.
Example: The following code contains a logging statement that tracks the contents of records added to a database by storing them
in a log file. Among other values that are stored, the get_password() function returns the user-supplied plaintext password
associated with the account.
pass = get_password();
...
fprintf(dbms_log, "%d:%s:%s:%s", id, pass, type, tstamp);
The code in the example above logs a plaintext password to the file system. Although many developers trust the file system as a
safe storage location for any and all data, it should not be trusted implicitly, particularly when privacy is a concern.
Private data can enter a program in a variety of ways:
- Directly from the user in the form of a password or personal information.
- Accessed from a database or other data store by the application.
- Indirectly from a partner or other third party.
Sometimes data that is not labeled as private can have a privacy implication in a different context. For example, student
identification numbers are usually not considered private because there is no explicit and publicly-available mapping to an
individual student's personal information. However, if a school generates student identification based on student social security
numbers, then the identification numbers should be considered private.
Security and privacy concerns often seem to compete with each other. From a security perspective, you should record all
important operations so that any anomalous activity can later be identified. However, when private data is involved, this practice
can create additional risk.
Although there are many ways in which private data can be handled unsafely, a common risk stems from misplaced trust.
Programmers often trust the operating environment in which a program runs, and therefore believe that it is acceptable to store
private information on the file system, in the registry, or in other locally-controlled resources. However, even if access to certain
resources is restricted, it does not guarantee that the individuals who do have access can be trusted with certain data. For
example, in 2004, an unscrupulous employee at AOL sold approximately 92 million private customer e-mail addresses to a
spammer marketing an offshore gambling web site [1].
In response to such high-profile exploits, the collection and management of private data is becoming increasingly regulated.
Depending on its location, the type of business it conducts, and the nature of any private data it handles, an organization may be
required to comply with one or more of the following federal and state regulations:
- Safe Harbor Privacy Framework [3]
- Gramm-Leach Bliley Act (GLBA) [4]
- Health Insurance Portability and Accountability Act (HIPAA) [5]
Number of Issues
[bar chart: issue counts by audit status; values not recoverable from the text extraction]
Abstract:
The function default_block_alloc() in pool_policy_malloc.c does not account for integer overflow, which can result in a logic
error or a buffer overflow.
Explanation:
Integer overflow errors occur when a program fails to account for the fact that an arithmetic operation can result in a quantity
either greater than a data type's maximum value or less than its minimum value. These errors often cause problems in memory
allocation functions, where user input intersects with an implicit conversion between signed and unsigned values. If an attacker
can cause the program to under-allocate memory or interpret a signed value as an unsigned value in a memory operation, the
program may be vulnerable to a buffer overflow.
Example 1: The following code excerpt from OpenSSH 3.3 demonstrates a classic case of integer overflow:
nresp = packet_get_int();
if (nresp > 0) {
    response = xmalloc(nresp*sizeof(char*));
    for (i = 0; i < nresp; i++)
        response[i] = packet_get_string(NULL);
}
If nresp has the value 1073741824 and sizeof(char*) has its typical value of 4, then the result of the operation
nresp*sizeof(char*) overflows, and the argument to xmalloc() will be 0. Most malloc() implementations will allow for the
allocation of a 0-byte buffer, causing the subsequent loop iterations to overflow the heap buffer response.
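A common fix is to check the multiplication before allocating. The following is a hedged sketch with a hypothetical helper name, using 32-bit arithmetic to mirror the example above (1073741824 * 4 wraps to 0 in a 32-bit unsigned type):

```c
#include <stdint.h>

/* Hypothetical helper: reject any request whose total size would wrap.
 * n * size overflows exactly when n > UINT32_MAX / size. */
int safe_alloc_size32(uint32_t n, uint32_t size, uint32_t *out) {
    if (size != 0 && n > UINT32_MAX / size)
        return -1;              /* n * size would wrap around */
    *out = n * size;
    return 0;
}
```

With this guard in place, the xmalloc() call above would be rejected instead of receiving a size of 0.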
Example 2: This example processes user input comprised of a series of variable-length structures. The first 2 bytes of input
dictate the size of the structure to be processed.
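The code itself did not survive in this report; a minimal sketch of the pattern described (function and variable names are hypothetical) might look like:

```c
#include <string.h>
#include <stddef.h>

#define MAX_STRUCT 512

/* len is read as a signed 16-bit value and checked with a signed
 * comparison, but memcpy() takes size_t, so a negative len passes the
 * check and then converts to an enormous unsigned length. */
size_t vulnerable_copy_len(const unsigned char *strm) {
    short len;
    memcpy(&len, strm, sizeof len);   /* first 2 bytes dictate the size */
    if (len <= MAX_STRUCT)
        return (size_t)len;           /* the length memcpy() would use  */
    return 0;                         /* rejected by the bounds check   */
}
```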
The programmer has set an upper bound on the structure size: if it is larger than 512, the input will not be processed. The
problem is that len is a signed integer, so the check against the maximum structure length is done with signed integers, but len is
converted to an unsigned integer for the call to memcpy(). If len is negative, then it will appear that the structure has an
appropriate size (the if branch will be taken), but the amount of memory copied by memcpy() will be quite large, and the attacker
will be able to overflow the stack with data in strm.
Recommendations:
[Chart: Number of Issues by analysis status]
Abstract:
The function base64_test() in encryption.c writes one location past the bounds of output on line 602, which could corrupt data,
cause the program to crash, or lead to the execution of malicious code.
Explanation:
Buffer overflow is probably the best known form of software security vulnerability. Most software developers know what a
buffer overflow vulnerability is, but buffer overflow attacks against both legacy and newly-developed applications are still quite
common. Part of the problem is due to the wide variety of ways buffer overflows can occur, and part is due to the error-prone
techniques often used to prevent them.
In a classic buffer overflow exploit, the attacker sends data to a program, which it stores in an undersized stack buffer. The result
is that information on the call stack is overwritten, including the function's return pointer. The data sets the value of the return
pointer so that when the function returns, it transfers control to malicious code contained in the attacker's data.
Although this type of stack buffer overflow is still common on some platforms and in some development communities, there are a
variety of other types of buffer overflow, including heap buffer overflows and off-by-one errors among others. There are a number of
excellent books that provide detailed information on how buffer overflow attacks work, including Building Secure Software [1],
Writing Secure Code [2], and The Shellcoder's Handbook [3].
At the code level, buffer overflow vulnerabilities usually involve the violation of a programmer's assumptions. Many memory
manipulation functions in C and C++ do not perform bounds checking and can easily exceed the allocated bounds of the buffers
they operate upon. Even bounded functions, such as strncpy(), can cause vulnerabilities when used incorrectly. The combination
of memory manipulation and mistaken assumptions about the size or makeup of a piece of data is the root cause of most buffer
overflows.
Example: The following code contains an off-by-one buffer overflow, which occurs when recv returns the maximum allowed
sizeof(buf) bytes read. In this case, the subsequent dereference of buf[nbytes] will write the null byte outside the bounds of
allocated memory.
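The example code was lost in this export; a hedged sketch of the pattern described (the function name is hypothetical) might look like:

```c
#include <sys/types.h>
#include <sys/socket.h>

#define BUFSIZE 256

/* recv() may return up to sizeof(buf) bytes; in that case buf[nbytes]
 * writes the terminating null one byte past the end of the buffer. */
ssize_t read_request(int sock) {
    char buf[BUFSIZE];
    ssize_t nbytes = recv(sock, buf, sizeof(buf), 0);  /* BUG            */
    /* safe form: recv(sock, buf, sizeof(buf) - 1, 0)                    */
    if (nbytes > 0)
        buf[nbytes] = '\0';    /* buf[BUFSIZE] when the buffer fills up */
    return nbytes;
}
```

Reserving one byte for the terminator, as in the commented safe form, removes the off-by-one.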
[Chart: Number of Issues by analysis status]
Abstract:
The random number generator implemented by srand() cannot withstand a cryptographic attack.
Explanation:
Insecure randomness errors occur when a function that can produce predictable values is used as a source of randomness in a
security-sensitive context.
Computers are deterministic machines, and as such are unable to produce true randomness. Pseudorandom Number Generators
(PRNGs) approximate randomness algorithmically, starting with a seed from which subsequent values are calculated.
There are two types of PRNGs: statistical and cryptographic. Statistical PRNGs provide useful statistical properties, but their
output is highly predictable and forms an easily reproduced numeric stream that is unsuitable for use in cases where security
depends on generated values being unpredictable. Cryptographic PRNGs address this problem by generating output that is more
difficult to predict. For a value to be cryptographically secure, it must be impossible or highly improbable for an attacker to
distinguish between the generated random value and a truly random value. In general, if a PRNG algorithm is not advertised as
being cryptographically secure, it is probably a statistical PRNG and should not be used in security-sensitive contexts, where its
use can lead to serious vulnerabilities such as easy-to-guess temporary passwords, predictable cryptographic keys, session
hijacking, and DNS spoofing.
Example: The following code uses a statistical PRNG to create a URL for a receipt that remains active for some period of time
after a purchase.
char* CreateReceiptURL() {
    int num;
    time_t t1;
    char *URL = (char*) malloc(MAX_URL);
    if (URL) {
        (void) time(&t1);
        srand48((long) t1); /* use time to set seed */
        sprintf(URL, "%s%ld%s", "http://test.com/", lrand48(), ".html");
    }
    return URL;
}
This code uses the lrand48() function to generate "unique" identifiers for the receipt pages it generates. Since lrand48() is a
statistical PRNG, it is easy for an attacker to guess the strings it generates. Although the underlying design of the receipt system
is also faulty, it would be more secure if it used a random number generator that did not produce predictable receipt identifiers.
Recommendations:
When unpredictability is critical, as is the case with most security-sensitive uses of randomness, use a cryptographic PRNG.
Regardless of the PRNG you choose, always use a value with sufficient entropy to seed the algorithm. (Values such as the
current time offer only negligible entropy and should not be used.)
There are various cross-platform solutions for C and C++ programs that offer cryptographically secure PRNGs, such as Yarrow
[1], CryptLib [2], Crypt++ [3], BeeCrypt [4] and OpenSSL [5].
On Windows(R) systems, C and C++ programs can use the CryptGenRandom() function in the CryptoAPI [6]. To avoid the
overhead of pulling in the entire CryptoAPI, access the underlying RtlGenRandom() function directly [7].
[Chart: Number of Issues by analysis status]
Abstract:
The SSLv2, SSLv23, and SSLv3 protocols contain several flaws that make them insecure, so they should not be used to transmit
sensitive data.
Explanation:
The Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols provide a protection mechanism to ensure the
authenticity, confidentiality and integrity of data transmitted between a client and web server. Both TLS and SSL have
undergone revisions resulting in periodic version updates. Each new revision was designed to address the security weaknesses
discovered in the previous versions. Use of an insecure version of TLS/SSL will weaken the strength of the data protection and
could allow an attacker to compromise, steal, or modify sensitive information.
Weak versions of TLS/SSL may exhibit one or more of the following properties:
- No protection against man-in-the-middle attacks
- Same key used for authentication and encryption
- Weak message authentication control
- No protection against TCP connection closing
The presence of these properties may allow an attacker to intercept, modify, or tamper with sensitive data.
Recommendations:
It is highly recommended to force the client to only use the most secure protocols.
Example 1:
c->sslContext = SSL_CTX_new (TLSv1_2_method());
The example above demonstrates how to enforce communication over the TLSv1.2 protocol.
[Chart: Number of Issues by analysis status]
Abstract:
An attacker may control the format string argument to vsnprintf() at log.c line 442, allowing an attack much like a buffer
overflow.
Explanation:
Format string vulnerabilities occur when:
1. Data enters the application from an untrusted source.
2. The data is passed as the format string argument to a function like sprintf(), FormatMessageW(), or syslog().
Example 1: The following code copies a command line argument into a buffer using snprintf().
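The snippet did not survive extraction; the pattern described can be sketched as follows (the function name is hypothetical):

```c
#include <stdio.h>

/* The argument is passed directly as the format string, so any %x or %n
 * directives it contains are interpreted by snprintf(). */
void format_arg(char *buf, size_t n, const char *arg) {
    snprintf(buf, n, arg);       /* BUG: tainted format string */
    /* safe form: snprintf(buf, n, "%s", arg);                 */
}
```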
This code allows an attacker to view the contents of the stack and write to the stack using a command line argument containing a
sequence of formatting directives. The attacker may read from the stack by providing more formatting directives, such as %x,
than the function takes as arguments to be formatted. (In this example, the function takes no arguments to be formatted.) By
using the %n formatting directive, the attacker may write to the stack, causing snprintf() to write the number of bytes output thus
far to the specified argument (rather than reading a value from the argument, which is the intended behavior). A sophisticated
version of this attack will use four staggered writes to completely control the value of a pointer on the stack.
Example 2: Certain implementations make more advanced attacks even easier by providing format directives that control the
location in memory to read from or write to. An example of these directives is shown in the following code, written for glibc:
printf("%d %d %1$d %1$d\n", 5, 9);
This call prints "5 9 5 5": the %1$d directives use glibc's positional argument syntax to refer back to the first argument, giving
an attacker finer control over which location a directive reads from or writes to.
It is also possible to use half-writes (%hn) to accurately control arbitrary DWORDS in memory, which greatly reduces the
complexity needed to execute an attack that would otherwise require four staggered writes, such as the one mentioned in
Example 1.
Example 3: Simple format string vulnerabilities often result from seemingly innocuous shortcuts. The use of some such shortcuts
is so ingrained that programmers might not even realize that the function they are using expects a format string argument.
For example, the syslog() function is sometimes used as follows:
...
syslog(LOG_ERR, cmdBuf);
...
Because the second parameter to syslog() is a format string, any formatting directives included in cmdBuf are interpreted as
described in Example 1. The fix is to pass cmdBuf as data to a constant format string:
...
syslog(LOG_ERR, "%s", cmdBuf);
...
Recommendations:
Whenever possible, pass static format strings to functions that accept a format string argument. If format strings must be
constructed dynamically, define a set of valid format strings and make selections from this safe set. Finally, always verify that
the number of formatting directives in the selected format string corresponds to the number of arguments to be formatted.
[Chart: Number of Issues by analysis status]
Abstract:
The window of time between the call to <a href="location://pjlib/src/pj/file_access_unistd.c###49###0###0">pj_file_size()</a>
and <a href="location://pjlib/src/pj/file_io_ansi.c###26###0###0">pj_file_open()</a> can be exploited to launch a privilege
escalation attack.
Explanation:
File access race conditions, known as time-of-check, time-of-use (TOCTOU) race conditions, occur when:
1. The program checks a property of a file, referencing the file by name.
2. The program later performs a file system operation using the same filename and assumes that the previously-checked property
still holds.
Example 1: The following code is from a program installed setuid root. The program performs certain file operations on behalf of
non-privileged users, and uses access checks to ensure that it does not use its root privileges to perform operations that should
otherwise be unavailable to the current user. The program uses the access() system call to check whether the person running the program
has permission to access the specified file before it opens the file and performs the necessary operations.
if (!access(file, W_OK)) {
    f = fopen(file, "w+");
    operate(f);
    ...
}
else {
    fprintf(stderr, "Unable to open file %s.\n", file);
}
The call to access() behaves as expected, and returns 0 if the user running the program has the necessary permissions to write to
the file, and -1 otherwise. However, because both access() and fopen() operate on filenames rather than on file handles, there is
no guarantee that the file variable still refers to the same file on disk when it is passed to fopen() that it did when it was passed to
access(). If an attacker replaces file after the call to access() with a symbolic link to a different file, the program will use its root
privileges to operate on the file even if it is a file that the attacker would otherwise be unable to modify. By tricking the program
into performing an operation that would otherwise be impermissible, the attacker has gained elevated privileges.
This type of vulnerability is not limited to programs with root privileges. If the application is capable of performing any
operation that the attacker would not otherwise be allowed to perform, then it is a possible target.
The window of vulnerability for such an attack is the period of time between when the property is tested and when the file is
used. Even if the use immediately follows the check, modern operating systems offer no guarantee about the amount of code that
will be executed before the process yields the CPU. Attackers have a variety of techniques for expanding the length of the
window of opportunity in order to make exploits easier, but even with a small window, an exploit attempt can simply be repeated
over and over until it is successful.
Example 2: The following code creates a file and then changes the owner of the file.
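The code did not survive extraction; a hedged sketch of the pattern described (the function name is hypothetical) might look like:

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* chown() operates on the name, not the descriptor, so an attacker can
 * replace the file between creat() and chown(). */
int create_for_user(const char *path, uid_t uid, gid_t gid) {
    int fd = creat(path, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return -1;
    if (chown(path, uid, gid) != 0) {   /* BUG: race window on the name  */
        close(fd);                      /* safe form: fchown(fd, uid, gid) */
        return -1;
    }
    close(fd);
    return 0;
}
```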
The code assumes that the file operated upon by the call to chown() is the same as the file created by the call to creat(), but that is
not necessarily the case. Since chown() operates on a file name and not on a file handle, an attacker may be able to replace the
file with a link to file the attacker does not own. The call to chown() would then give the attacker ownership of the linked file.
Recommendations:
To prevent file access race conditions, you must ensure that a file cannot be replaced or modified once the program has begun a
series of operations on it. Avoid functions that operate on filenames, since they are not guaranteed to refer to the same file on
disk outside of the scope of a single function call. Open the file first and then use functions that operate on file handles rather
than filenames.
The most effective way to check file access permissions is to drop to the privilege of the current user and attempt to open the file
with those reduced privileges. If the file open succeeds, additional access checks can be performed atomically using the resulting
file handle. If the file open fails, then the user does not have access to the file and the operation should be aborted. By dropping
to the user's privilege before attempting a series of file operations, the program cannot be easily tricked by changes to the
underlying file system.
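The handle-based approach can be sketched as follows; this is a hedged illustration with a hypothetical function name, not code from the scanned project:

```c
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Open the file first, then perform every check through the descriptor
 * with fstat(), so later operations are guaranteed to act on the same
 * file that was checked. */
int open_checked(const char *path) {
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);               /* refuse anything but a regular file */
        return -1;
    }
    return fd;                   /* all further operations use this fd */
}
```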
Tips:
1. Be careful, a race condition can still exist after the file is opened if later operations depend on a property that was checked
before the file was opened. For example, if a stat structure is populated before a file is opened, and then a later decision about
whether to operate on the file is based on a value read from the stat structure, the file could be modified prior to being opened,
rendering the stat information stale. Always verify that file operations are performed on open file handles rather than filenames.
2. Some file system APIs do not have alternatives that operate on file handles. For example, there is no way to delete a file via a
file handle using standard C functions. Thus, some race conditions can only be avoided by placing the file in a directory path that
is completely under the control of the program. If this mitigation is taken, then otherwise unsafe system calls can be used safely.
[Chart: Number of Issues by analysis status]
Abstract:
Initialization vectors should be created using a cryptographic pseudorandom number generator.
Explanation:
Initialization vectors (IVs) should be created using a cryptographic pseudorandom number generator. Not using a random IV
makes the resulting ciphertext much more predictable and susceptible to a dictionary attack.
Example 1: The following code creates a non-random IV using a hardcoded string.
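The snippet was lost in this export; a hedged sketch of the flaw and one possible fix (names are hypothetical) might look like:

```c
#include <stdio.h>
#include <string.h>

/* A constant string used as the IV makes every encryption of the same
 * plaintext identical. */
void hardcoded_iv(unsigned char iv[16]) {
    memcpy(iv, "0123456789abcdef", 16);    /* BUG: predictable IV */
}

/* The fix: draw the IV from the kernel CSPRNG instead. */
int random_iv(unsigned char iv[16]) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;
    size_t n = fread(iv, 1, 16, f);
    fclose(f);
    return n == 16 ? 0 : -1;
}
```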
[Chart: Number of Issues by analysis status]
Abstract:
Hardcoded passwords may compromise system security in a way that cannot be easily remedied.
Explanation:
It is never a good idea to hardcode a password. Not only does hardcoding a password allow all of the project's developers to view
the password, it also makes fixing the problem extremely difficult. Once the code is in production, the password cannot be
changed without patching the software. If the account protected by the password is compromised, the owners of the system will
be forced to choose between security and availability.
Example: The following code uses a hardcoded password to connect to a database:
...
rc = SQLConnect(*hdbc, server, SQL_NTS, "scott",
SQL_NTS, "tiger", SQL_NTS);
...
This code will run successfully, but anyone who has access to it will have access to the password. Once the program has shipped,
there is likely no way to change the database user "scott" with a password of "tiger" unless the program is patched. An employee
with access to this information could use it to break into the system. Even worse, if attackers have access to the executable for
the application they can disassemble the code, which will contain the values of the passwords used.
Recommendations:
Passwords should never be hardcoded and should generally be obfuscated and managed in an external source. Storing passwords
in plaintext anywhere on the system allows anyone with sufficient permissions to read and potentially misuse the password.
Starting with Microsoft(R) Windows(R) 2000, Microsoft(R) provides Windows Data Protection Application Programming
Interface (DPAPI), which is an OS-level service that protects sensitive application data, such as passwords and private keys [1].
Tips:
1. When identifying null, empty, or hardcoded passwords, default rules only consider fields and variables that contain the word
password. However, the Fortify Custom Rules Editor provides the Password Management wizard that makes it easy to create
rules for detecting password management issues on custom-named fields and variables.
[Chart: Number of Issues by analysis status]
Abstract:
The function pj_thread_create() in os_core_unix.c fails to release a lock it acquires on line 610, which might lead to deadlock.
Explanation:
The program can potentially fail to release a system resource.
Resource leaks have at least two common causes:
- Error conditions and other exceptional circumstances.
- Confusion over which part of the program is responsible for releasing the resource.
Most unreleased resource issues result in general software reliability problems, but if an attacker can intentionally trigger a
resource leak, the attacker may be able to launch a denial of service by depleting the resource pool.
Example: The following function fails to destroy the condition variable it allocates when an error occurs. In a long-lived
process, repeated failures can exhaust the pool of available synchronization resources.
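The example code did not survive extraction; a minimal hedged sketch of the leak pattern (the function name and failure flag are hypothetical) might look like:

```c
#include <pthread.h>

/* When the later setup step fails, the function returns without
 * destroying the condition variable it initialized, leaking the
 * resource. */
int start_worker(int setup_fails) {
    pthread_cond_t cond;
    if (pthread_cond_init(&cond, NULL) != 0)
        return -1;
    if (setup_fails)
        return -1;                  /* BUG: cond is never destroyed here */
    /* ... use cond ... */
    pthread_cond_destroy(&cond);    /* only the success path cleans up   */
    return 0;
}
```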
[Chart: Number of Issues by analysis status]
Abstract:
The function pj_srand() in rand.c is passed a tainted value for the seed. Functions that generate random or pseudorandom values,
which are passed a seed, should not be called with a tainted argument.
Explanation:
Functions that generate random or pseudorandom values (such as rand()), which are passed a seed (such as srand()), should not
be called with a tainted argument. Doing so allows an attacker to control the value used to seed the pseudorandom number
generator, and therefore predict the sequence of values (usually integers) produced by calls to the pseudorandom number
generator.
Recommendations:
Use a cryptographic PRNG seeded with hardware-based sources of randomness, such as ring oscillators, disk drive timing,
thermal noise, or radioactive decay. For instance, on Unix-like platforms, use /dev/random if you require a high-entropy seed for
the pseudorandom number generator. Doing so makes the sequence of data produced by rand() and similar functions much harder
to predict.
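The recommendation can be sketched as follows; this is a hedged illustration with a hypothetical function name, reading the seed from the kernel entropy pool rather than from a tainted value:

```c
#include <stdio.h>
#include <stdlib.h>

/* Seed the generator from /dev/random instead of a tainted or
 * predictable value such as the current time. */
int seed_from_dev_random(void) {
    unsigned int seed;
    FILE *f = fopen("/dev/random", "rb");
    if (f == NULL)
        return -1;
    if (fread(&seed, sizeof seed, 1, f) != 1) {
        fclose(f);
        return -1;
    }
    fclose(f);
    srand(seed);
    return 0;
}
```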
[Chart: Number of Issues by analysis status]
Abstract:
The random or pseudorandom number generator implemented by srand() relies on a weak entropy source.
Explanation:
The lack of a proper source of entropy for use by the random or pseudorandom number generator may lead to denial of service or
generation of predictable sequences of numbers. If the random or pseudorandom number generator uses the source of entropy
that runs out, the program may pause or even crash, leading to denial of service. Alternatively, the random or pseudorandom
number generator may produce predictable numbers. A weak source of random or pseudorandom numbers may lead to
vulnerabilities such as easy-to-guess temporary passwords, predictable cryptographic keys, session hijacking, and DNS spoofing.
Example 1: The following code uses the system clock as the entropy source:
...
srand (time(NULL));
r = (rand() % 6) + 1;
...
Because the system clock generates predictable values, it is not a good source of entropy. The same applies to other
non-hardware-based sources of randomness, including system/input/output buffers, user/system/hardware/network serial numbers
or addresses, and user input.
Recommendations:
Avoid using non-hardware-based sources of randomness. Whenever possible, use hardware-based sources of randomness, such as
ring oscillators, disk drive timing, thermal noise, or radioactive decay.
On Unix-like platforms, the character special files /dev/random and /dev/urandom (present since Linux 1.3.30) provide an
interface to the kernel's random number generator. The random number generator gathers environmental noise from device
drivers and other sources into an entropy pool. When the entropy pool is empty, reads from /dev/random will block until
additional environmental noise is gathered. However, reads from /dev/urandom will not block waiting for more entropy. As a
result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic
attack on the algorithms used by the driver. Always favor /dev/random over /dev/urandom.
[Chart: Number of Issues by analysis status]
Abstract:
The function fromPj() in siptypes.cpp allocates memory on line 376 and fails to free it.
Explanation:
Memory leaks have two common and sometimes overlapping causes:
- Error conditions and other exceptional circumstances.
- Confusion over which part of the program is responsible for freeing the memory.
Most memory leaks result in general software reliability problems, but if an attacker can intentionally trigger a memory leak, the
attacker may be able to launch a denial of service attack (by crashing the program) or take advantage of other unexpected
program behavior resulting from a low memory condition [1].
Example 1: The following C function leaks a block of allocated memory if the call to read() fails to return the expected number
of bytes:
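The code was lost in this export; a hedged sketch of the leak pattern described (the function name is hypothetical) might look like:

```c
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 512

/* When read() does not return the expected number of bytes, the function
 * returns without freeing buf. */
char *get_block(int fd) {
    char *buf = malloc(BLOCK_SIZE);
    if (buf == NULL)
        return NULL;
    if (read(fd, buf, BLOCK_SIZE) != BLOCK_SIZE)
        return NULL;        /* BUG: buf is leaked on this path        */
    return buf;             /* fix: free(buf) before the early return */
}
```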
[Chart: Number of Issues by analysis status]
Abstract:
The information returned by the call to gethostbyname() is not trustworthy. Attackers may spoof DNS entries. Do not rely on
DNS for security.
Explanation:
Many DNS servers are susceptible to spoofing attacks, so you should assume that your software will someday run in an
environment with a compromised DNS server. If attackers are allowed to make DNS updates (sometimes called DNS cache
poisoning), they can route your network traffic through their machines or make it appear as if their IP addresses are part of your
domain. Do not base the security of your system on DNS names.
Example 1: The following code uses a DNS lookup to determine whether or not an inbound request is from a trusted host. If an
attacker can poison the DNS cache, they can gain trusted status.
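The snippet did not survive extraction; a hedged sketch of the pattern described (function names and the trusted hostname are hypothetical) might look like:

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

/* Trust is granted purely on the reverse-DNS name of the peer, which an
 * attacker who poisons the DNS cache can forge. */
int is_trusted_name(const char *resolved_name) {
    return strcmp(resolved_name, "trusted.example.com") == 0;
}

int is_trusted_peer(struct in_addr addr) {
    struct hostent *hp = gethostbyaddr((const char *)&addr,
                                       sizeof addr, AF_INET);  /* BUG */
    return hp != NULL && is_trusted_name(hp->h_name);
}
```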
IP addresses are more reliable than DNS names, but they can also be spoofed. Attackers may easily forge the source IP address
of the packets they send, but response packets will return to the forged IP address. To see the response packets, the attacker has
to sniff the traffic between the victim machine and the forged IP address. In order to accomplish the required sniffing, attackers
typically attempt to locate themselves on the same subnet as the victim machine. Attackers may be able to circumvent this
requirement by using source routing, but source routing is disabled across much of the Internet today. In summary, IP address
verification can be a useful part of an authentication scheme, but it should not be the single factor required for authentication.
Recommendations:
You can increase confidence in a domain name lookup if you check to make sure that the host's forward and backward DNS
entries match. Attackers will not be able to spoof both the forward and the reverse DNS entries without controlling the
nameservers for the target domain. However, this is not a foolproof approach: attackers may be able to convince the domain
registrar to turn over the domain to a malicious nameserver. Basing authentication on DNS entries is simply a risky practice.
While no authentication mechanism is foolproof, there are better alternatives than host-based authentication. Password systems
offer decent security, but are susceptible to bad password choices, insecure password transmission, and bad password
management. A cryptographic scheme like SSL is worth considering, but such schemes are often so complex that they bring with
them the risk of significant implementation errors, and key material can always be stolen. In many situations, multi-factor
authentication including a physical token offers the most security available at a reasonable price.
Tips:
1. Check how the DNS information is being used. In addition to considering whether or not the program's authentication
mechanisms can be defeated, consider how DNS spoofing can be used in a social engineering attack. For example, if attackers
can make it appear that a posting came from an internal machine, can they gain credibility?
Exploitable: 1 (0%)
<none>: 2,801 (100%)