This project is a Java-based microservice built with Spring Boot that tracks live sports events. For each event marked as "live," it calls a mock REST API every 10 seconds, transforms the data, and publishes it to a Kafka topic.
A summary of the key design and technology choices made for this project:

- Framework: Spring Boot was chosen for its rapid development capabilities, embedded web server, and simplified dependency management, making it ideal for quickly creating a standalone microservice.
- Scheduling: A `ScheduledThreadPoolExecutor` is used to manage the periodic REST calls. This choice provides direct control over the scheduling, execution, and cancellation of polling tasks for each live event, which is managed within the `EventService`.
- State Management: The state of live events is managed in-memory using a `ConcurrentHashMap`. This approach is simple, thread-safe, and sufficient for the scope of this prototype, avoiding the need for an external database.
- Messaging: Apache Kafka was selected as the message broker for its high throughput, scalability, and durability, which are well suited to real-time data streams like sports updates. The `spring-kafka` library simplifies integration.
- Error Handling & Retries: Spring Retry (`@Retryable`) was implemented for handling transient failures during message publication to Kafka. This declarative approach is robust, easy to configure with backoff policies, and keeps the business logic clean of complex retry code. A `@Recover` method provides a fallback for final failures.
- API Mocking: A simple Spring `@RestController` is included within the same service to act as the mock external API. This removes external dependencies during development and testing, making the project self-contained and easy to run.
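The scheduling and in-memory state choices fit together naturally: the map holds one task handle per live event, and cancellation is just removing the handle. The sketch below illustrates that pattern with plain JDK classes; the class name, pool size, and method names are illustrative, not necessarily those used in the actual `EventService`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: one recurring polling task per live event,
// cancelled when the event is marked NOT_LIVE.
class EventPollingSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(4); // pool size is arbitrary here

    // Event ID -> handle of its recurring polling task.
    private final Map<String, ScheduledFuture<?>> tasks = new ConcurrentHashMap<>();

    void startPolling(String eventId, Runnable poll) {
        // computeIfAbsent is atomic, so concurrent LIVE updates for the
        // same event schedule only one task.
        tasks.computeIfAbsent(eventId, id ->
                scheduler.scheduleAtFixedRate(poll, 0, 10, TimeUnit.SECONDS));
    }

    void stopPolling(String eventId) {
        ScheduledFuture<?> task = tasks.remove(eventId);
        if (task != null) {
            task.cancel(false); // let an in-flight poll finish
        }
    }

    void shutdown() {
        scheduler.shutdownNow();
    }
}
```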
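To see what `@Retryable` with a backoff policy buys declaratively, here is an imperative plain-Java illustration of the same semantics: retry with exponential backoff, then a final fallback analogous to `@Recover`. The project itself uses the Spring Retry annotations; this standalone sketch only shows the behavior they provide.

```java
import java.util.function.Supplier;

// Imperative illustration of retry-with-backoff semantics similar to
// what Spring Retry's @Retryable/@Recover provide declaratively.
class RetrySketch {
    // Tries `action` up to maxAttempts times, doubling the delay after
    // each failure; falls back to `recover` if every attempt fails.
    static <T> T withRetry(Supplier<T> action, Supplier<T> recover,
                           int maxAttempts, long initialDelayMs) {
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) break; // exhausted: fall through
                try {
                    Thread.sleep(delay);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
                delay *= 2; // exponential backoff
            }
        }
        return recover.get(); // @Recover analogue: final fallback
    }
}
```

The declarative version keeps this loop out of the Kafka producer entirely, which is the main reason the project favors it.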
Follow these steps to get the application running.
- Java Development Kit (JDK) 17 or later.
- Docker and Docker Compose.
The project includes a `docker-compose.yml` file to run Kafka and Zookeeper. Open a terminal in the project root and run:

```shell
docker-compose up -d
```
A shell script is provided to create the necessary Kafka topic. First, make the script executable, then run it:

```shell
chmod +x create-topic.sh
./create-topic.sh
```

Start the application with the Gradle wrapper:

```shell
./gradlew bootRun
```

The service will start and be ready to receive requests on port 8080.
You can use curl to interact with the service.
- Start polling for an event (mark it as `LIVE`):

```shell
curl -X POST http://localhost:8080/events/status \
  -H "Content-Type: application/json" \
  -d '{"eventId": "event123", "status": "LIVE"}'
```
- Stop polling for an event (mark it as `NOT_LIVE`):

```shell
curl -X POST http://localhost:8080/events/status \
  -H "Content-Type: application/json" \
  -d '{"eventId": "event123", "status": "NOT_LIVE"}'
```
To run all the unit and integration tests included in the project, use the Gradle wrapper:
```shell
./gradlew test
```

This command executes all tests and generates a report in the `build/reports/tests/test/` directory. The tests cover:
- Status updates via the REST endpoint.
- Scheduled call creation and cancellation.
- Message publication under both normal and error conditions, including the retry logic.
- Boilerplate Code: The initial project structure, `build.gradle` file, and main application class were generated by the AI.
- Unit Tests: A couple of unit tests for the controller, service, and Kafka producer, using Mockito and Spring's testing framework, were generated.