


Research of Dynamic Fuzzing Methods
to Identify Vulnerabilities in Program Code

Maria Lapina1, Arsen Aganesov1(B), Vitalii Lapin1, and Vijay Anant Athavale2
1 North-Caucasus Federal University, Stavropol, Russia
[email protected]
2 Walchand Institute of Technology, Solapur, India

Abstract. This article presents a study of dynamic fuzzing methods for detecting vulnerabilities in program code. It begins with a classification of program code from the point of view of fuzzing, describing the principles by which code can be assigned to one group or another, which supports more effective vulnerability detection. The article then discusses a practical application of fuzzing: the target is the Linux operating system kernel, and the Syzkaller fuzzing environment is used as the main tool. Based on the results of the kernel analysis, the errors obtained are described and classified by criticality. Applying dynamic fuzzing methods helps increase the vulnerability resistance of both critical components and the system as a whole.

Keywords: dynamic fuzzing · information security · software code vulnerabilities · classification of code · testing

1 Introduction
In the rapidly evolving field of software development, software code security is becoming
paramount. As software becomes an increasingly integral part of day-to-day operations in
various industries, identifying and addressing vulnerabilities in software code is critical
to ensuring reliability and security.
Along with the evolving field of software development, the methods by which hackers exploit vulnerabilities are evolving as well. This requires new methods for debugging software in search of vulnerabilities. The concept of fuzzing appeared in 1990 in work published by Barton Miller [1]. The author presented a "fuzzer" program whose purpose was to feed a target program random and malformed data in search of situations it could not handle. The efficiency of the method was, and remains, high: it makes it possible not only to study the behavior of the program at the moment of its direct execution, but also to determine the type of input data in advance.
The Linux kernel [2], the link between computer hardware and its processes, forms the basis of many computing systems, from servers to personal computers. Given the importance of its role and its wide distribution, ensuring its reliability and security is critical. One of the priority tasks in kernel development is improving the quality of the program code, which directly affects the performance and security of the entire system. Solving this task requires reliable testing tools capable of detecting errors and vulnerabilities. This article discusses the use of Syzkaller, a well-known open-source fuzz-testing system that uses system calls as input data.

(© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. M. Lapina et al. (Eds.): AISMA 2024, LNNS 863, pp. 172–184, 2024. https://doi.org/10.1007/978-3-031-72171-7_18)
Syzkaller has become one of the most popular tools used to find vulnerabilities
in the Linux kernel, mainly due to its ability to automatically generate input data and
fine-tune the configuration of the fuzzing process, identifying hard-to-detect bugs and
vulnerabilities. Syzkaller helps detect critical vulnerabilities in software that may go
undetected in traditional testing phases such as static analysis. This allows you to increase
kernel reliability and provide a more systematic approach to ensuring the quality of the
software under test.
Regular analysis of program code with the help of such tools as Syzkaller contributes
to earlier detection of vulnerabilities and as a consequence significantly saves time
and resources that could be spent on eliminating these vulnerabilities at later stages of
development.
The aim of the study is to classify program code from the point of view of fuzzing, analyze the application of Syzkaller to the Linux kernel, examine the results obtained, and classify the errors found by criticality level.

2 Classification of Program Code in Terms of Fuzzing


Fuzzing [3] is a method of dynamic code testing based on automatically feeding random
and often unacceptable data to the program input to detect vulnerabilities and errors.
This method is critical for software testing because it allows you to detect errors that
traditional testing methods may fail to detect.
Traditional testing methods refer to techniques such as unit testing, integration testing, and system testing, which often rely on a set of predefined input data and test cases meant to cover the possible code execution paths. Such methods require testers to envision potential points of failure and develop tests that check for these conditions. This approach is effective in most scenarios but may miss unexpected or unusual paths not covered by the original test cases. Among other things, traditional methods often require a deep understanding of the software's organization to create effective tests, making them less applicable to complex systems with extensive code bases.
The main difference between fuzzing and traditional methods is that fuzzing does not depend on predetermined input data; instead, it generates random or distorted data itself in order to test the software's response. This randomness allows fuzzing to identify problems that lie outside the scope of conventional test cases, including vulnerabilities such as buffer overflows and memory leaks. For complex and extensive code bases like the Linux kernel, fuzzing is an effective approach to improving security and reliability.
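The core idea can be sketched in a few lines of Python. Here `parse_record` is an invented stand-in for the software under test (not code from any system discussed in this article), and a "crash" simply means an unhandled exception:

```python
import random

def parse_record(data: bytes) -> int:
    """Toy target: expects 'LEN:payload' where LEN is a decimal number.
    (A hypothetical stand-in for real software under test.)"""
    head, _, payload = data.partition(b":")
    return len(payload) - int(head)  # int() raises ValueError on junk input

def fuzz(target, rounds: int = 1000, seed: int = 0):
    """Minimal black-box fuzzer: random bytes in, crashing inputs out."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception as exc:  # any unhandled exception counts as a "crash"
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs out of 1000")
```

Even this naive loop quickly finds inputs the target cannot process, which is essentially what Miller's original experiment demonstrated at scale against UNIX utilities.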

2.1 Principles of Classification


In the field of software testing, in particular fuzzing, code classification is key. Different
types of code react differently to fuzzing methods, and understanding these nuances and
knowing how to apply these methods can significantly increase the efficiency of fuzzing
testing. Code classification will help testers choose the right fuzzing strategy and tool,
make adjustments to the fuzzing process to maximize efficiency, and predict potential
problems in the testing process (see Fig. 1).

Fig. 1. Types of fuzzing: by information on program structure (smart / stupid); by requirements on source code (source-based / binary-based); by availability of feedback (feedback-driven / not feedback-driven); by operations on input data (generation / mutation)

• On information on program structure


Stupid fuzzing covers "black-box" fuzzers, which have no information about the format and structure of the target's input data. They only observe the I/O behavior of the target program, treating it as a "black box" into which they cannot look. In such testing, the fuzzer does not know how the input data is processed by the program. This method has become widespread among hackers due to its ease of use and versatility, as it does not require access to the source code [3].
Two types of fuzzers belong to smart fuzzing.
White-box fuzzers are the opposite of black-box fuzzers and have all the information they need about the input data, obtained by analyzing the structure of the code and the information collected during its execution. This type of fuzzing is slower than black-box fuzzing but more efficient at finding bugs and vulnerabilities.
Gray-box fuzzers sit between the previous two types. They have incomplete information about the data format but, unlike black-box fuzzers, can collect some information inside the program under investigation to evaluate its structure and execution. Fuzzers of this type seek an effective balance between speed of execution, ease of use, and broad coverage [3].

• As required in source code


Compiler-based fuzzers require access to the source code. They include a special compiler for the target programming language that adds lightweight instrumentation to the target program during compilation; this instrumentation collects coverage data during fuzzing.
Fuzzers working with binary code are used when source code is not available. Many fuzzers operate only on binary code, especially when researching proprietary or commercial software. Such fuzzers are limited to binary instrumentation tools [4].
• By availability of feedback
Feedback-driven: fuzzers with feedback can adjust inputs for better efficiency and focus. They gather so-called "coverage", which allows them to go deeper into the program and touch parts of the code that were previously unreached. Feedback also helps identify the most fruitful areas for exploration [5].
Not feedback-driven fuzzing, also referred to as "traditional fuzzing", does not process feedback from the application but constantly feeds it random data [6].
• By operations on input data
Generation: fuzzers using this approach create test cases based on a provided input data structure, generating completely new data each time that is valid according to that structure [7].
Mutation-based: such fuzzers use feedback from the application to mutate already used input data when doing so allows penetrating a part of the program that the previous data could not cover [8].
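The difference between the two operations on input data can be shown in a short sketch. The record format `LEN:payload` below is invented purely for illustration: `generate` builds a fresh, structurally valid input from the format description each time, while `mutate` distorts a member of an existing corpus:

```python
import random

rng = random.Random(1)

def generate() -> bytes:
    """Generation: build a new, structurally valid input every time
    from a known format (here: 'LEN:payload')."""
    payload = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
    return str(len(payload)).encode() + b":" + payload

def mutate(corpus: list) -> bytes:
    """Mutation: take an existing 'interesting' input and distort it,
    here by inverting one byte."""
    data = bytearray(rng.choice(corpus))
    data[rng.randrange(len(data))] ^= 0xFF
    return bytes(data)

seed_corpus = [b"3:abc", b"0:"]
print(generate())           # a new well-formed record
print(mutate(seed_corpus))  # a corrupted variant of a seed
```

In practice, mutation needs a seed corpus (and, in feedback-driven fuzzers, coverage information to decide which inputs are worth mutating), while generation needs a specification of the input format.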

2.2 Fuzzing Comparison Table

Code classifications that matter when applying fuzzing were discussed earlier, including aspects such as the need for source code, feedback from the application, and the availability of information about the structure of the program as a whole. This link between theory and practice is illustrated below through detailed comparison tables that address specific problems that can be encountered when fuzz testing application and system software (Table 1).

Table 1. Security and vulnerabilities

Feature                                          Application Software   System Software
Exposure to low-level security vulnerabilities   No                     Yes
Uses obfuscation techniques                      Yes                    No
Requires specialized fuzzers                     No                     Yes
Affected by code maturity                        No                     Yes
Receives regular updates and patches             Yes                    No

Exposure to low-level vulnerabilities. System software directly manages system


operations and communicates with hardware, which creates the risk of low-level vulnerabilities such as kernel exploits, buffer overflows, and memory corruption. Fuzzing
systematically checks how software processes the input it is given, making it a powerful
tool for identifying memory-related vulnerabilities. Application software usually faces
higher-level security issues [9].
Use of obfuscation techniques. Obfuscation is the process of transforming a program's code to complicate its analysis and the understanding of its algorithms; obfuscators are the programs that perform this transformation. Application software often uses obfuscation to protect intellectual property and hinder reverse engineering, which makes it relevant to consider how obfuscation can complicate the fuzzing process [10].
Requires specialized fuzzers. The need for specialized fuzzers arises because of
the complexity of the software under test and specific requirements. Such fuzzers are
required for testing complex systems such as operating systems, embedded applications,
or applications with unique operating protocols. Such fuzzers are especially useful for
complex and mission-critical systems where generic fuzzers may not be effective [11].
Impact of code maturity. Maturity impact refers to the effect of a program's age and stage of development on its susceptibility to vulnerabilities. Mature software that has been subject to regular testing and updates over time, with many vulnerabilities fixed, may still contain bugs that have been discovered but not yet fixed; it benefits from fuzzing targeted at these known weaknesses. Newer software that has not been extensively tested is more likely to contain undiscovered vulnerabilities; for such cases it is useful to apply broad, varied fuzzing to find and fix them. This distinction is important when building a fuzzing strategy for specific software at different stages of its lifecycle [12].
Receive regular updates and patches. Application software is updated more frequently, which can result in both fixing existing vulnerabilities and introducing new ones,
which directly affects stability and security. Receiving regular updates requires equally
regular fuzzing to ensure that no new vulnerabilities are introduced [13] (Table 2).
Complex memory management. System software handles low-level operations such
as allocating and freeing memory. Errors in memory management lead to vulnerabilities
such as buffer overflows and memory leaks. Fuzzing can effectively identify hidden
problems in memory operations [14].
Concurrency problems. Parallelism problems occur when multiple processes or threads run concurrently, sometimes leading to race conditions or deadlocks [15]. These problems are especially common in system software, which can perform many tasks simultaneously. Fuzzing can identify concurrency problems before they cause unpredictable behavior outside of testing, which is highly relevant to the performance and reliability of the overall system.

Table 2. Technical and operational aspects

Feature                                    Application Software   System Software
Complex memory management                  No                     Yes
Parallelism issues                         No                     Yes
Vulnerability to input validation errors   Yes                    Yes
Vulnerability to input validation errors. Input validation errors are common in both application and system software: improper input processing can lead to vulnerabilities such as SQL injection and buffer overflows. Fuzzing makes it possible to effectively test the reliability of input validation functions by sending random data and observing how the program reacts when processing it. This helps identify weaknesses in the software [16] (Table 3).

Table 3. Interface and compliance

Feature                        Application Software   System Software
Easy setup for fuzzing         Yes                    No
Depends on external libraries  Yes                    No
Often works with user input    Yes                    No

Simple configuration for fuzzing. Application software is organized in a simpler way


and has fewer dependencies, making it quick and easy to prepare for testing compared
to system software, which may require complex fuzzing configuration [17].
Dependencies on external libraries. Application software often utilizes third-party
libraries and frameworks to speed up the development process and quickly integrate
the functionality they offer. Lack of proper debugging and support for these libraries
can create potential risks. Fuzzing allows you to detect errors that external libraries and
frameworks contain, which indicates the need to carefully assess the safety of their use
[18].
Working with user input. Attacks via user input are one of the most common types.
Application software usually processes a large amount of data that is entered by the user,
which creates the risk of an attack in this way. Fuzzing such inputs allows you to simulate
many scenarios of user behavior to ensure that the input data is properly validated [19].
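This kind of testing can be sketched as follows. The `validate_username` routine below is a hypothetical application-level check invented for illustration, not drawn from any real codebase; the fuzz loop verifies both that the validator never crashes and that every input it accepts satisfies a safety property:

```python
import random
import string

def validate_username(name: str) -> bool:
    """Hypothetical validator: 3-16 chars, letters/digits/underscore only."""
    if not (3 <= len(name) <= 16):
        return False
    return all(c in string.ascii_letters + string.digits + "_" for c in name)

def fuzz_validator(rounds: int = 2000, seed: int = 7) -> bool:
    """Feed random user-like strings; the validator must not raise,
    and accepted inputs must contain no dangerous metacharacters."""
    rng = random.Random(seed)
    for _ in range(rounds):
        name = "".join(rng.choice(string.printable)
                       for _ in range(rng.randrange(32)))
        ok = validate_username(name)  # must not raise for any input
        if ok:
            # property check: no SQL-style metacharacters slipped through
            assert not any(c in name for c in "'\";-"), name
    return True

print(fuzz_validator())  # True when no property violation is found
```

Checking a property of accepted inputs, rather than only looking for crashes, is what makes fuzzing useful against injection-style flaws in validation code.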

3 Applying the Syzkaller Tool in the Linux Kernel


The research applied Syzkaller [20], an open-source fuzzing environment that can search for bugs not only in the Linux kernel but also in operating systems such as Akaros, FreeBSD, Fuchsia, NetBSD, and Windows. As a fuzzer, Syzkaller automatically generates and executes thousands of random inputs, in the form of system calls, against the kernel in order to detect bugs and vulnerabilities that could lead to crashes, memory leaks, or other dangerous consequences. Let us classify Syzkaller as a fuzzer for a deeper understanding of its capabilities.
In terms of information about the program structure, Syzkaller is classified as a gray-box fuzzer, as it works with partial knowledge of the kernel internals thanks to the built-in KCOV kernel coverage collection mechanism, which helps it understand which parts of the code are exercised by testing.

As far as source code is concerned, Syzkaller belongs to the compiler-based fuzzers and requires the kernel to be compiled with a specific configuration, which plays an important role in its fuzzing process.
By the presence of feedback from the program under study, Syzkaller uses feedback from the kernel (code coverage and failure reports) to adjust the fuzzing process, making it more efficient.
In terms of operations on the input data, most inputs are created by mutation: thanks to KCOV, the fuzzer collects coverage and a corpus, making it possible to mutate "interesting" inputs in order to penetrate deeper into the code.
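This coverage-and-corpus loop can be sketched in simplified form. The toy `target` below reports which branches an input reached, standing in for the kind of feedback KCOV gives Syzkaller; inputs that reach new coverage are retained in the corpus and mutated further:

```python
import random

rng = random.Random(3)

def target(data: bytes) -> set:
    """Toy program that reports which 'branches' an input reached,
    imitating KCOV-style coverage feedback."""
    cov = {"entry"}
    if data[:1] == b"K":
        cov.add("magic")
        if len(data) > 4:
            cov.add("deep")
    return cov

def mutate(data: bytes) -> bytes:
    data = bytearray(data)
    if rng.randrange(2) == 0:                           # flip a byte
        data[rng.randrange(len(data))] ^= rng.randrange(1, 256)
    else:                                               # append a byte
        data.append(rng.randrange(256))
    return bytes(data)

corpus, seen = [b"\x00"], set()
for _ in range(3000):
    candidate = mutate(rng.choice(corpus))
    cov = target(candidate)
    if not cov <= seen:                 # new coverage -> keep the input
        seen |= cov
        corpus.append(candidate)

print(sorted(seen))
```

Mutating retained inputs lets the loop reach the deeper branches that purely random inputs would rarely hit, which is exactly why collecting coverage pays off.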
This classification gives insight into the workings of Syzkaller, which requires source code, works with feedback, and uses input mutation techniques. The gray-box method allows it to be more efficient than a black-box fuzzer, adapting its work based on partial information about the structure of the software under investigation, while differing from a white-box fuzzer in its higher speed of execution. The interface of the tool is intuitive and shows extensive information about the test in progress (see Fig. 2).

Fig. 2. The interface of the tool

The interface provides detailed information about the testing process, allowing the user to observe parameters such as the number of virtual machines, the assembled corpus, coverage volume, total uptime, the number and types of crashes, the number of executed system calls, and the full execution log. It is also possible to examine the report for each crash in detail. Syzkaller also tries to reproduce each crash it finds in order to better understand its origin, but this is not always possible.
Within the framework of the research, we analyzed Linux kernel version 5.0 and found more than 30 errors during 6 h of testing. Below we analyze several of these errors, classify them by severity and priority of fixing, and establish the reasons for their occurrence.

3.1 Severity and Priority in Software Testing

Today, the vast majority of developers use a severity/priority classification system [21] that helps to fix software defects efficiently. These classifications help triage problems and prioritize their fixes to maintain software quality and operability. Defect severity is the degree of impact of a particular defect on the development or functioning of a component or system. Factors such as the possibility of data loss or corruption are also considered when assessing severity. The levels of severity are discussed below (see Fig. 3).

Fig. 3. The levels of severity: Maximum, High, Moderate, Minor, Cosmetic

• Maximum level

The presence of vulnerabilities of this level leads to complete system failure or data
loss. Example: system failure, security breach.
• High level
Causes tangible inconvenience to many users, performance degradation, but the
system remains functional. Example: paste from clipboard does not work, hotkeys do
not work.
• Moderate level
Affects some functions or performance, but the basic functionality works properly.
Example: dialogue box does not close after pressing a button.
• Minor level
Rarely detected by a small percentage of users and has little or no effect on their
work. Example: typos, formatting or user interface problems.
• Cosmetic level
Affects only the appearance or user interface, but not its operation. Example: problems with paragraphs or fonts.
Priority determines the order in which defects should be fixed and has three levels of gradation, shown in the figure (see Fig. 4).

Fig. 4. The levels of gradation: Highest, Medium, Low

• Highest (ASAP, As Soon As Possible)
The fix needs to be completed in the shortest possible time, which can range from "in the next update" to "within the next few minutes". Example: an attacker can use SQL injection to gain access to an application without valid credentials.
• Medium
Indicates that the defect should be corrected as soon as possible because it is either
already interfering with operations or will start interfering in the near future. Example:
Totals in financial reports are displayed incorrectly due to a calculation error.
• Low
Corrected last, after correcting defects of higher levels or according to the work
schedule. Example: spelling error in the settings menu.
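Severity and priority can be combined into a simple triage ordering. The numeric ranks and sample defects below are an illustrative convention, not part of any standard:

```python
# Illustrative triage: order defects by priority first, then severity.
SEVERITY = {"Maximum": 0, "High": 1, "Moderate": 2, "Minor": 3, "Cosmetic": 4}
PRIORITY = {"Highest": 0, "Medium": 1, "Low": 2}

defects = [
    {"name": "typo in settings menu",  "severity": "Minor",    "priority": "Low"},
    {"name": "SQL injection in login", "severity": "Maximum",  "priority": "Highest"},
    {"name": "wrong report totals",    "severity": "Moderate", "priority": "Medium"},
]

# Sort by (priority rank, severity rank): lower rank means fixed sooner.
triaged = sorted(defects, key=lambda d: (PRIORITY[d["priority"]],
                                         SEVERITY[d["severity"]]))
for d in triaged:
    print(d["priority"], "/", d["severity"], "-", d["name"])
# The SQL injection is fixed first, the cosmetic typo last.
```

Putting priority before severity in the sort key reflects the convention described above: priority expresses the order of fixing, while severity expresses the magnitude of impact.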

3.2 Overview of Errors Found by the Syzkaller Tool


Let’s consider some errors and give them a criticality and priority classification to
understand the order of their correction (see Table 4).
The following conclusions can be drawn from the analysis of the errors detected in the Linux kernel. The study revealed several critical and major issues that significantly affect the stability and security of the system. These include WARNING-level reports as well as critical bugs such as SLAB-OUT-OF-BOUNDS, a kernel BUG in buffer.c, and USE-AFTER-FREE. The block-level I/O errors and memory issues identified during the analysis indicate the need for immediate intervention to prevent possible data corruption and to ensure overall system reliability. Importantly, each of the detected warnings and errors requires priority attention so that remediation minimizes the risk of serious consequences such as system failures and security vulnerabilities. The analysis also highlights the importance of effective tools for identifying and fixing critical vulnerabilities in system software. Maintaining a high level of security and stability in the Linux kernel requires constant monitoring and a quick response to detected problems. This ensures reliable and stable operation of the system, protecting it from potential threats and failures.

Table 4. Overview of errors

Module (device): generic_make_request_checks
  Severity: Moderate. A warning indicates problems with processing block I/O requests that could result in data corruption or loss.
  Priority: High. Ensuring the reliability of block I/O operations is critical to the overall reliability of the system.
  Impact: The block layer is the main component of the Linux kernel that handles all I/O requests to block devices.

Module (device): SLAB-OUT-OF-BOUNDS
  Severity: Maximum. The error is related to writing to memory outside of its boundaries, which can lead to serious consequences such as memory corruption, crashes, and potential security vulnerabilities.
  Priority: ASAP. Memory corruption errors require urgent resolution to maintain system stability and security.
  Impact: Writes outside memory boundaries can corrupt neighboring memory areas, resulting in unpredictable behavior, failures, and possible data loss.

Module (device): BUFFER.C
  Severity: Critical. The kernel BUG macro is called, which is a critical failure in the kernel. This indicates a serious error that jeopardizes the stability and reliability of the system; such errors can cause the entire system to crash.
  Priority: ASAP. Kernel errors of this criticality level require immediate action to ensure system stability and prevent data loss.
  Impact: Kernel BUG() calls are serious indicators of critical failures that require immediate attention to maintain system stability.

Module (device): USE-AFTER-FREE
  Severity: Critical. These errors involve accessing freed memory, leading to undefined behavior, system crashes, and security vulnerabilities.
  Priority: ASAP. Such errors can compromise the stability and security of the system, so they should be resolved as soon as possible.
  Impact: Can cause unpredictable behavior, including system crashes, data corruption, and potential security exploits.

4 Conclusion

This paper is devoted to key aspects of program code analysis and to methods of vulnerability detection and classification. Code analysis technologies are relevant for cyber-physical systems, for systems based on the Internet of Things, and for critical information infrastructure [22–25]. The paper consists of two main sections, each covering different aspects of code analysis and the application of specialized tools to improve software security and reliability.
The first section considers different approaches to fuzz testing: black-box, white-box, and gray-box methods. Classifying program code from the point of view of fuzzing makes it possible to determine the most vulnerable software sections, which require thorough testing.
The second section analyzes the practical application of the Syzkaller tool for finding vulnerabilities in the Linux kernel. Syzkaller is a powerful open-source fuzzing tool capable of identifying bugs and vulnerabilities in various operating systems. The investigation of Linux kernel version 5.0 found more than 30 bugs in 6 h of testing. The tool demonstrated its effectiveness in identifying critical vulnerabilities such as SLAB-OUT-OF-BOUNDS, the kernel BUG in buffer.c, and USE-AFTER-FREE. These bugs were ranked by criticality and priority, allowing developers to focus their efforts on fixing the most severe ones immediately.
This study confirms the importance of using a variety of methods for analyzing program code to ensure its quality and safety. Code classification and fuzz testing with tools such as Syzkaller make it possible to effectively detect vulnerabilities and errors that may be missed by traditional testing.
In conclusion, a comprehensive approach to analyzing software code, including code classification and fuzz testing, is key to creating reliable and secure software. Tools like Syzkaller have shown their effectiveness in finding complex errors, making them indispensable for developers and software quality assurance engineers.

References
1. Miller, B.P., Fredriksen, L., So, B.: An empirical study of the reliability of UNIX utilities. Commun. ACM 33(12), 32–44 (1990). https://doi.org/10.1145/96267.96279
2. Tanenbaum, A.S., Bos, H.: Modern Operating Systems, 4th edn. Pearson Education, Vrije Universiteit Amsterdam, The Netherlands (2023)
3. Manès, V., et al.: The art, science, and engineering of fuzzing: a survey. ACM Comput. Surv. 51(3), Article 65 (2019). https://doi.org/10.1145/3197978
4. Nagy, S., et al.: Breaking through binaries: compiler-quality instrumentation for better binary-only fuzzing. In: 30th USENIX Security Symposium, August 2021
5. Fioraldi, A.: Program state abstraction for feedback-driven fuzz testing using likely invariants (2020). https://www.researchgate.net/publication/347534809_Program_State_Abstraction_for_Feedback-Driven_Fuzz_Testing_using_Likely_Invariants
6. Godefroid, P.: Automated vulnerability analysis using advanced fuzzing: generation based and evolutionary fuzzers (2016). https://www.researchgate.net/publication/309464490_Automated_Vulnerability_Analysis_using_Advanced_Fuzzing_Generation_Based_and_Evolutionary_Fuzzers
7. Li, Y., et al.: Generation-based fuzzing? Don't build a new generator, reuse! (2023). https://www.researchgate.net/publication/369273562_Generation-based_fuzzing_Don't_build_a_new_generator_reuse
8. Verwer, S.: Complementing model learning with mutation-based fuzzing (2016). https://www.researchgate.net/publication/309766175_Complementing_Model_Learning_with_Mutation-Based_Fuzzing
9. Bhatkar, S., et al.: Mitigations for low-level coding vulnerabilities: incomparability and limitations. https://www.researchgate.net/publication/242353098_Mitigations_for_low-level_coding_vulnerabilities_incomparability_and_limitations
10. Collberg, C., Thomborson, C., Low, D.: A taxonomy of obfuscating transformations. Technical report, Department of Computer Science, University of Auckland, New Zealand (1997)
11. Kolozsvari, B., et al.: Fuzzing the Internet of Things: a review on the techniques and challenges for efficient vulnerability discovery in embedded systems (2021). https://www.researchgate.net/publication/349001489_Fuzzing_the_Internet_of_Things_A_Review_on_the_Techniques_and_Challenges_for_Efficient_Vulnerability_Discovery_in_Embedded_Systems
12. Kawamoto, D., et al.: Is vulnerability report confidence redundant? Pitfalls using temporal risk scores (2024). https://www.researchgate.net/publication/353037856_Is_Vulnerability_Report_Confidence_Redundant_Pitfalls_Using_Temporal_Risk_Scores
13. Nabavi, S., et al.: The role of security updates and patches in addressing cyber security threats and vulnerabilities: a case study of recent cyber security attacks (2022). https://www.researchgate.net/publication/357826952_The_Role_of_Security_Updates_and_Patches_in_Addressing_Cyber_Security_Threats_and_Vulnerabilities_A_Case_Study_of_Recent_Cyber_Security_Attacks
14. Orso, A., et al.: Memory allocation vulnerability analysis and analysis optimization for C programs based on formal methods. https://www.researchgate.net/publication/283196026_Memory_Allocation_Vulnerability_Analysis_and_Analysis_Optimization_for_C_Programs_Based_on_Formal_Methods
15. Paltrow, S., Brown, J., Harris, M.: Software engineering challenges for parallel processing systems (2017). https://apps.dtic.mil/sti/pdfs/AD1015670.pdf
16. Xie, T., et al.: A validation model of data input for web services. https://www.researchgate.net/publication/256446757_A_Validation_Model_of_Data_Input_for_Web_Services_Slides
17. Chen, X., et al.: ECFuzz: effective configuration fuzzing for large-scale systems (2024). https://www.researchgate.net/publication/378029535_ECFuzz_Effective_Configuration_Fuzzing_for_Large-Scale_Systems
18. Li, Y., et al.: Third-party library dependency for large-scale SCA in the C/C++ ecosystem: how far are we (2024). https://www.researchgate.net/publication/372363030_Third-Party_Library_Dependency_for_Large-Scale_SCA_in_the_CC_Ecosystem_How_Far_Are_We
19. Cottam, J.A., et al.: Crossing the streams: fuzz testing with user input (2017). https://www.researchgate.net/publication/322513903_Crossing_the_Streams_Fuzz_testing_with_user_input
20. Syzkaller — kernel fuzzer. GitHub. https://github.com/google/syzkaller
21. Shatnawi, M.Q., Alazzam, B.: An assessment of Eclipse bugs' priority and severity prediction using machine learning (2022). https://www.researchgate.net/profile/Batool-Alazzam/publication/359910074_An_Assessment_of_Eclipse_Bugs'_Priority_and_Severity_Prediction_Using_Machine_Learning/links/62561438b0cee02d69682406/An-Assessment-of-Eclipse-Bugs-Priority-and-Severity-Prediction-Using-Machine-Learning.pdf
22. Basan, E., Lapina, M., Lesnikov, A., Basyuk, A., Mogilny, A.: Trust monitoring in a cyber-physical system for security analysis based on distributed computing. In: Alikhanov, A., Lyakhov, P., Samoylenko, I. (eds.) Current Problems in Applied Mathematics and Computer Science and Systems, pp. 430–440. Springer Nature Switzerland, Cham (2023). https://doi.org/10.1007/978-3-031-34127-4_42
23. Proshkin, N.A., Basan, E.S., Lapina, M.A., Klepikova, A.G., Lapin, V.G.: Developing models of IoT infrastructures to identify vulnerabilities and analyse threats. IOP Conf. Ser. Mater. Sci. Eng. 873(1), 012018 (2020). https://doi.org/10.1088/1757-899X/873/1/012018
24. Maksimova, E., Lapina, M., Lapin, V.: Synthesis of models for ensuring information security of subjects of critical information infrastructure under destructive influences. CEUR Workshop Proceedings, vol. 3094, pp. 108–117 (2022)
25. Maksimova, E.A., Lapina, M.A., Lapin, V.G., Rusakov, A.M.: Anthropomorphic model of states of subjects of critical information infrastructure under destructive influences. Lecture Notes in Networks and Systems, pp. 569–580 (2022). https://doi.org/10.1007/978-3-030-97020-8_51
