Assignment 05. Group No 39
GROUP NO 39
NO NAME REG. NUMBER SIGNATURE
1 JACKSON LEONARD MTUI NIT/BCS/2023/554 J.L.Mtui
2 JAIDI SHARAFI MTEPA NIT/BIT/2023/2317 J.s.mtepa
3 VICTOR SINDIKA MAGDAN NIT/BIT/2023/2078 V.s.magdan
4 JAMES FRANK KINGAZI NIT/BIT/2023/2085 J.f.kingazi
5 ABDULKARIM SELEMANI ALLY NIT/BIT/2023/2062 A.s.ally
6 GASPER BONIPHACE SHEMAGEMBE NIT/BIT/2023/2063 G.b.shemagembe
7 STEPHANO A SWAI NIT/BIT/2023/2251 S.a.swai
8 SHUKURU J SELEMANI NIT/BIT/2023/2121 s.j.selemani
9 TIMOTH JENSEN PETER NIT/BIT/2023/2201 T.j.peter
10 GASPER JENAI NGAILO NIT/BIT/2023/2275 G.j.ngailo
1. Introduction to Transactions
Write a brief introduction (200-300 words) explaining what a transaction is in the context
of DBMS. Include the importance of transactions in ensuring data integrity and
consistency
In Database Management Systems (DBMS), a transaction refers to a logical unit of work
performed within a database system. It encapsulates a series of operations, such as insertions,
deletions, or updates, that are treated as a single indivisible unit: either all of the operations
within the transaction complete successfully, or none of them do, which preserves the integrity
and consistency of the database. Transactions are vital components of database systems
primarily because they play a pivotal role in maintaining data integrity and consistency: by
grouping related changes into one atomic unit, they prevent partial updates from leaving the
database in an inconsistent state, they isolate concurrent users from each other's in-progress
changes, and they guarantee that committed work survives system failures.
2. ACID Properties
Explain the ACID properties of transactions in detail:
Durability: Ensure that once a transaction has been committed, it remains so, even
in the event of a system failure.
Durability ensures that once a transaction has been committed, its changes
are permanently recorded and persist even in the event of a system failure
or crash.
This property guarantees that the effects of committed transactions persist
in the database and are not lost due to transient failures. It ensures that
data remains consistent and recoverable, even after unexpected system
failures.
Durability is typically achieved by maintaining transaction logs or journals,
which record all committed transactions and their corresponding changes.
These logs are used for recovery purposes to restore the database to a
consistent state after a failure.
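The log-based recovery behaviour described above can be sketched in a few lines of Python. The sketch uses SQLite rather than a client-server DBMS purely so it is self-contained and runnable; the accounts table, its values, and the WAL journal mode are illustrative assumptions, not part of the assignment schema.

```python
import os
import sqlite3
import tempfile

# Use a file-backed database so committed data can outlive the connection.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")  # a write-ahead log backs durability
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()  # the change is flushed to the log/database file
conn.close()   # simulate the process ending (or crashing after commit)

# Reopen the database: the committed row is still there.
conn2 = sqlite3.connect(path)
row = conn2.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
print(row[0])  # 100
```

Had the process died before `commit()`, the journal would instead be used to roll the incomplete transaction back on the next open, which is exactly the recovery role of transaction logs described above.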
3. Concurrency Control Mechanisms: Discuss the need for concurrency control in DBMS. Explain
the following mechanisms with examples:
4. Isolation Levels: Describe the different isolation levels defined by the SQL standard:
Read Uncommitted:
In this isolation level, transactions can see changes made by other transactions even
before they are committed. It allows the highest degree of concurrency but poses the
risk of encountering dirty reads, non-repeatable reads, and phantom reads.
Example:
o Transaction T1 updates a row but has not yet committed.
o Transaction T2 reads the row and sees T1's uncommitted value (a dirty read).
o If T1 subsequently rolls back, T2 has acted on data that never officially existed.
Read Committed:
Read Committed isolation level allows transactions to see only changes that have
been committed by other transactions. It eliminates dirty reads but still allows
non-repeatable reads and phantom reads.
Example:
o Transaction T1 reads a row from a table.
o Meanwhile, Transaction T2 updates the same row and commits the changes.
o If T1 reads the row again, it will see the updated value because the changes
made by T2 are now committed.
Repeatable Read:
Repeatable Read isolation level ensures that once a transaction reads a data item, it will
see the same value throughout the transaction, even if other transactions modify the data.
It prevents non-repeatable reads but still allows phantom reads.
Example:
o Transaction T1 reads a row from a table.
o Transaction T2 updates the same row and commits, but T1 continues to see the
original value for the remainder of its transaction.
o However, if T2 inserts new rows that match T1's query condition, T1 may still
encounter them as phantom reads.
Serializable:
Serializable isolation level provides the highest level of consistency by ensuring that
transactions execute as if they were serially executed, even though they may be
executed concurrently. It eliminates all concurrency anomalies, including dirty reads,
non-repeatable reads, and phantom reads.
Example:
o Transaction T1 reads a row from a table.
o Meanwhile, Transaction T2 tries to update the same row, but it is blocked until
T1 completes.
o Once T1 finishes its transaction, T2 can proceed with its update, ensuring that
no anomalies occur.
5. Implementation Task
Implement a simple transaction management system using a relational database (e.g.,
MySQL, PostgreSQL). Create a database with at least two tables and demonstrate the
following:
Let us create a simple transaction management system using MySQL. In this example,
we'll create a database with two tables: customers and orders. We'll demonstrate the
successful completion of a transaction, a failed transaction that is rolled back, and the
impact of different isolation levels.
CREATE DATABASE IF NOT EXISTS transaction_demo;
USE transaction_demo;
CREATE TABLE customers (
  customer_id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100)
);
-- The amount column is illustrative; the foreign key links orders to customers.
CREATE TABLE orders (
  order_id INT PRIMARY KEY AUTO_INCREMENT,
  customer_id INT,
  amount DECIMAL(10, 2),
  FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);
START TRANSACTION;
INSERT INTO customers (name) VALUES ('John'), ('Alice');
INSERT INTO orders (customer_id, amount) VALUES (1, 100.00), (2, 250.00);
COMMIT;
In this transaction, we insert orders for customers John and Alice. Since there are no
errors, the transaction is committed, and the changes are permanent.
START TRANSACTION;
INSERT INTO orders (customer_id, amount) VALUES (1, 75.00);
-- Customer 999 does not exist, so this violates the foreign key constraint.
INSERT INTO orders (customer_id, amount) VALUES (999, 50.00);
ROLLBACK;
In this transaction, we attempt to insert orders for customer John and a non-existing
customer. Since there is a violation of the foreign key constraint, the transaction is rolled
back, and no changes are made to the database.
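The commit-versus-rollback behaviour can also be verified programmatically. The following sketch mirrors the MySQL scenario in Python with SQLite, chosen only so the example is self-contained; note that SQLite enforces foreign keys only after PRAGMA foreign_keys = ON, and the orders columns here are illustrative assumptions matching the narrative above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when enabled
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount REAL)""")
conn.execute("INSERT INTO customers (customer_id, name) VALUES (1, 'John')")
conn.commit()

try:
    conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, 75.0)")
    # customer 999 does not exist: this violates the foreign key constraint
    conn.execute("INSERT INTO orders (customer_id, amount) VALUES (999, 50.0)")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # undo the whole unit of work, including the first insert

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 0: neither insert survived the rollback
```

The key point is that the first, perfectly valid insert is also undone: the transaction is treated as one indivisible unit.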
The use of different isolation levels and their impact on concurrent transactions.
Provide the SQL scripts and a brief explanation of your implementation.
To demonstrate the impact of isolation levels, we'll use the SET TRANSACTION ISOLATION LEVEL
statement before executing transactions. We'll set different isolation levels and perform
concurrent transactions to observe their effects.
-- Session 1
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT SUM(amount) FROM orders WHERE customer_id = 1;
-- Session 2 updates one of John's orders and commits here.
SELECT SUM(amount) FROM orders WHERE customer_id = 1;
COMMIT;
-- End Session 1
In this scenario, Session 1 retrieves the total order amount for customer John twice.
Meanwhile, in Session 2, an update is performed on one of John's orders. The READ
COMMITTED isolation level ensures that Session 1 only sees committed changes. Therefore, the
second SELECT statement in Session 1 will reflect the updated total order amount after the
update in Session 2.
6. Performance Analysis
Analyze the performance impact of different concurrency control techniques and isolation
levels. Conduct experiments with multiple concurrent transactions and measure the throughput
and response time. Present your findings in a report.
Experiment Design:
Setup Database Environment:
o Measure throughput (transactions per second) and response time (time taken
for a transaction to complete) as performance metrics.
o Use tools like Apache JMeter or custom scripts to generate concurrent
transactions and measure performance.
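A custom measurement script of the kind mentioned above might look like the following sketch. It times a batch of small insert transactions and derives the two metrics. It is single-threaded and uses an in-memory SQLite database, so the absolute numbers are not comparable to a real MySQL benchmark; it only illustrates how throughput and response time are computed.

```python
import sqlite3
import statistics
import time

# isolation_level=None gives manual transaction control via BEGIN/COMMIT
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")

N = 200
latencies = []
start = time.perf_counter()
for i in range(N):
    t0 = time.perf_counter()
    conn.execute("BEGIN")
    conn.execute("INSERT INTO orders (amount) VALUES (?)", (float(i),))
    conn.execute("COMMIT")
    latencies.append(time.perf_counter() - t0)  # response time of one transaction
elapsed = time.perf_counter() - start

throughput = N / elapsed                   # transactions per second
avg_response = statistics.mean(latencies)  # mean seconds per transaction
print(f"throughput={throughput:.0f} tps, avg response={avg_response * 1000:.3f} ms")
```

For a real experiment the same loop body would be run from multiple concurrent clients (threads or a tool such as JMeter) against each isolation level, keeping everything else fixed.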
Experiment Scenarios:
Execute Experiments:
o Run experiments for each scenario, ensuring consistent test conditions.
o Record throughput and response time for each scenario.
Analyze Results:
Report:
Introduction:
o In this experiment, we aimed to analyze the performance impact of
different concurrency control techniques and isolation levels in a relational database
environment. We conducted experiments to measure throughput and response
time under various scenarios.
Experimental Setup:
Experiment Results:
Scenario 1: Concurrency Control Techniques Locking:
Read Committed:
Repeatable Read:
Serializable:
References:
MySQL Documentation
Apache JMeter User Manual
7. Conclusion: Summarize your learnings from the assignment. Reflect on the importance of
transactions and concurrency control in DBMS and any challenges you encountered during the
implementation and analysis.
Through this assignment, I've gained valuable insights into the critical role of transactions and
concurrency control in database management systems (DBMS). Here are some key learnings
and reflections:
i. Importance of Transactions:
o Transactions ensure data integrity and consistency by allowing a set of
operations to be treated as a single, indivisible unit.
o They provide the ACID properties (Atomicity, Consistency, Isolation, Durability),
which are essential for maintaining data reliability and recoverability in multi-user
environments.
ii. Significance of Concurrency Control:
o Concurrency control mechanisms regulate concurrent access to shared data to
prevent anomalies such as lost updates, inconsistent reads, and data corruption.
o Different concurrency control techniques, including locking, timestamp-based
concurrency, and optimistic concurrency control, offer trade-offs between
concurrency and consistency.
iii. Challenges Encountered:
o Implementing and testing concurrency control mechanisms and isolation levels
require careful planning and execution.
o Designing realistic experiments to measure the performance impact of these
mechanisms involves setting up complex test environments and defining
appropriate metrics.
o Analyzing the results and interpreting performance metrics require a deep
understanding of database concepts and performance tuning techniques.
iv. Key Takeaways:
o The choice of concurrency control technique and isolation level depends on
factors such as application requirements, workload characteristics, and
performance goals.
o Optimizing database performance requires a balance between concurrency,
consistency, and scalability.
o Continuous monitoring and tuning of database configurations are essential to
adapt to changing workloads and ensure optimal performance.