Timeline

Below is the proposed timeline with the key phases and milestones for the project. It assumes development can start immediately and represents an ideal scenario, subject to adjustment based on resource availability and any unforeseen complexities. Durations are given in days, and milestones mark the completion of major deliverables:
High Level Timeline
[Gantt chart: Phases 1–7 plotted against a daily calendar from April through December 2025. Exact dates for each phase appear in the detailed schedule below.]
Detailed Schedule
Each phase below lists its deliverables, start date, end date, and duration.
Phase 1: Requirements & Design (4/8/2025 – 4/14/2025, 7 days)
- Finalize the PRD and obtain stakeholder sign-off.
- Define the data schema and select the tech stack (confirm Python libraries, cloud services); a schema sketch follows below.
- Design the architecture in detail (module structure, deployment approach).
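To make the schema discussion concrete, here is a minimal sketch of what one normalized lease-offer record could look like. Every field name here is an assumption to be confirmed during Phase 1 sign-off, not a final design.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class LeaseOffer:
    """Hypothetical normalized record; all fields are Phase 1 proposals."""
    source_site: str            # e.g., "justlease.nl"
    make: str                   # e.g., "Volkswagen"
    model: str                  # e.g., "ID.3"
    trim: Optional[str]         # e.g., "Pro Business"; not every site lists one
    monthly_price_eur: float    # advertised monthly lease price
    duration_months: int        # contract duration
    km_per_year: int            # included annual mileage
    url: str                    # canonical offer URL
    scraped_at: str             # ISO-8601 timestamp of the scrape

offer = LeaseOffer("justlease.nl", "Volkswagen", "ID.3", "Pro Business",
                   389.0, 48, 10000, "https://www.justlease.nl/example-offer",
                   "2025-04-14T06:00:00Z")
print(asdict(offer))  # the dict form serializes directly to the JSON output
```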
Phase 2: Prototype Scraper Development
- Develop a prototype scraper for one site (e.g., JustLease.nl) as a proof of concept. - Validate data extraction, adjust for any blockers (e.g., if site has anti-scraping measures). - Demo the single-site scrape output (JSON) to stakeholders for feedback on data fields and format.
4/13/2025
4/26/2025
14 days
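A minimal sketch of what the prototype could look like with requests and BeautifulSoup. The listing URL and CSS selectors are placeholders; the real values depend on JustLease.nl's actual page structure, which is exactly what this phase is meant to pin down.

```python
import json
import requests
from bs4 import BeautifulSoup

LISTING_URL = "https://www.justlease.nl/personenauto"  # placeholder URL

def scrape_listings(url: str) -> list[dict]:
    """Fetch one listing page and extract offer cards (selectors are guesses)."""
    resp = requests.get(url, headers={"User-Agent": "AutoCompareBot/0.1"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    offers = []
    for card in soup.select("div.offer-card"):  # hypothetical selector
        offers.append({
            "source_site": "justlease.nl",
            "title": card.select_one("h2").get_text(strip=True),
            "monthly_price": card.select_one(".price").get_text(strip=True),
            "url": card.select_one("a")["href"],
        })
    return offers

if __name__ == "__main__":
    print(json.dumps(scrape_listings(LISTING_URL), indent=2, ensure_ascii=False))
```

If the site renders offers client-side, the prototype would swap requests for a headless browser such as Playwright; that is one of the blockers Phase 2 is designed to surface.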
Phase 3: Multi-site Scraper Implementation (4/21/2025 – 5/3/2025, 13 days)
- Implement scrapers for all target sites (DirectLease and 123Lease, in addition to JustLease), behind a common interface as sketched below.
- Ensure each scraper module extracts all required fields.
- Run the scrapers in succession and then together to test the combined data pipeline.
- Run an initial internal test of the matching logic on the aggregated data (confirm the same models from the three sources are linked correctly).
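One way to keep the per-site scrapers uniform is a small registry so each site is a drop-in module. The function and registry names below are illustrative, not an agreed design.

```python
from typing import Callable

ScraperFn = Callable[[], list[dict]]

def scrape_justlease() -> list[dict]:
    # Stand-in for the real module; every site scraper returns the same shape.
    return [{"source_site": "justlease.nl", "title": "Kia Niro",
             "monthly_price_eur": 429.0}]

# Registry of per-site scrapers; DirectLease and 123Lease entries are
# added as their modules are completed.
SCRAPERS: dict[str, ScraperFn] = {
    "justlease.nl": scrape_justlease,
    # "directlease.nl": scrape_directlease,
    # "123lease.nl": scrape_123lease,
}

def run_all() -> list[dict]:
    """Run every registered scraper in succession and pool the results."""
    combined: list[dict] = []
    for site, scrape in SCRAPERS.items():
        try:
            rows = scrape()
            print(f"{site}: {len(rows)} offers")
            combined.extend(rows)
        except Exception as exc:  # one failing site must not abort the whole run
            print(f"{site}: FAILED ({exc})")
    return combined

if __name__ == "__main__":
    print(f"total: {len(run_all())} offers")
```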
Phase 4: Data Pipeline Integration (4/27/2025 – 5/9/2025, 13 days)
- Develop the normalization and matching pipeline as a unified process that runs after scraping; a sketch of the matching step follows below.
- Integrate all scrapers' outputs into the pipeline, producing one consolidated dataset.
- Implement the central data storage (set up the database or file output).
- Produce a sample full JSON output covering all sites.
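The RACI below lists an SBERT matching model; under that assumption, cross-site matching could embed normalized offer titles with sentence-transformers and link pairs above a similarity threshold. The model name and the 0.8 threshold are starting points to tune during this phase, not agreed choices.

```python
import re
from sentence_transformers import SentenceTransformer, util

def normalize(title: str) -> str:
    """Basic normalization before matching: lowercase, collapse whitespace."""
    return re.sub(r"\s+", " ", title.lower()).strip()

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def match_offers(titles_a: list[str], titles_b: list[str], threshold: float = 0.8):
    """Link each title in A to its best match in B when similarity clears the bar."""
    emb_a = model.encode([normalize(t) for t in titles_a], convert_to_tensor=True)
    emb_b = model.encode([normalize(t) for t in titles_b], convert_to_tensor=True)
    scores = util.cos_sim(emb_a, emb_b)  # cosine similarity matrix
    matches = []
    for i, title in enumerate(titles_a):
        j = int(scores[i].argmax())
        if float(scores[i][j]) >= threshold:
            matches.append((title, titles_b[j], round(float(scores[i][j]), 3)))
    return matches

print(match_offers(["Volkswagen ID.3 Pro"], ["VW ID3 Pro Business", "Kia e-Niro"]))
```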
Phase 5: Deployment Setup (4/13/2025 – 4/21/2025, 9 days)
- Set up the chosen cloud environment (e.g., AWS or Azure).
- Deploy the scrapers and pipeline to the cloud (for example, containerize and deploy to AWS EC2, or set up AWS Lambda functions).
- Configure the scheduler in the cloud (e.g., an AWS EventBridge rule or Azure Scheduler) for automated runs; a sketch follows below.
- Implement monitoring and alerting in the cloud context (CloudWatch alarms, etc.).
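If AWS is chosen, the EventBridge schedule could be set up with boto3 roughly as follows. The rule name, cron expression, and Lambda ARN are placeholders for this sketch.

```python
import boto3

events = boto3.client("events")

# Run the pipeline daily at 05:00 UTC; the exact cadence is a Phase 5 decision.
events.put_rule(
    Name="autocompare-daily-scrape",          # placeholder rule name
    ScheduleExpression="cron(0 5 * * ? *)",
    State="ENABLED",
)

# Point the rule at the pipeline's Lambda; the ARN below is a placeholder.
events.put_targets(
    Rule="autocompare-daily-scrape",
    Targets=[{
        "Id": "scrape-pipeline",
        "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:scrape-pipeline",
    }],
)
# Note: EventBridge also needs invoke permission on the Lambda
# (lambda.add_permission), omitted here for brevity.
```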
Phase 6: Testing & QA (5/4/2025 – 5/17/2025, 14 days)
- Functional testing: verify that the collected data matches what is on the websites for multiple samples from each site; adjust scrapers for any missed fields or incorrect parsing (see the test sketch below).
- Performance testing: run the full pipeline and measure completion time; test parallel runs; ensure the pipeline finishes within the expected window.
- Error simulation: intentionally break a selector or simulate a site change to confirm that error handling and alerts behave as expected.
- Integration testing: have the AutoCompare platform ingest the JSON output in a staging environment to confirm compatibility and that all needed data is present.
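A sketch of the functional tests with pytest: spot-check that scraped records carry the required fields and plausible values. The field names follow the hypothetical schema sketched under Phase 1; in the real suite the fixture would load live or recorded scraper output.

```python
import pytest

REQUIRED_FIELDS = {"source_site", "make", "model", "monthly_price_eur", "url"}

@pytest.fixture
def sample_offers():
    # Placeholder data; the real fixture would pull actual scraper output.
    return [{"source_site": "justlease.nl", "make": "Kia", "model": "Niro",
             "monthly_price_eur": 429.0, "url": "https://www.justlease.nl/example"}]

def test_required_fields_present(sample_offers):
    for offer in sample_offers:
        assert REQUIRED_FIELDS <= offer.keys()

def test_price_is_plausible(sample_offers):
    for offer in sample_offers:
        # Guards against parsing errors such as grabbing a yearly total.
        assert 50 <= offer["monthly_price_eur"] <= 5000
```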
Phase 7: Launch & Handover (5/18/2025 – 5/29/2025, 12 days)
- Production launch: activate the scheduled scraping in production and verify that the first automated run populates the comparison platform with correct data.
- Knowledge transfer: provide documentation to the client's team on how the system works, how to add new sites, and how to respond to errors.
- Agree on a maintenance plan (who will fix scrapers if a site changes, and how future enhancements will be handled).



Note on Timeline: This schedule outlines roughly two months (about eight weeks) of development. If more developers are available, some phases could run in parallel; for instance, scrapers for different sites could be built simultaneously to shorten Phase 3. Conversely, if this is a single-developer project or the client adds requirements, the timeline may extend. Regular checkpoints with stakeholders at the end of each phase are included to ensure alignment and to accommodate feedback early (for example, after the Phase 2 prototype, requirements may be refined before proceeding).
Each milestone delivers a tangible output (prototype, dataset, deployment, etc.), allowing AutoCompare to track progress. By Phase 7, the goal is to have a fully functioning, automated scraping platform running in production, with the team ready to maintain it.

Team Responsibility Matrix (RACI)

| Task | Backend Dev | Data Engineer | DevOps | QA |
| --- | --- | --- | --- | --- |
| Build scrapers |  |  |  |  |
| Normalization pipeline |  |  |  |  |
| SBERT matching model |  |  |  |  |
| API development |  |  |  |  |
| Deployment (cloud/on-prem) |  |  |  |  |
| Monitoring setup |  |  |  |  |
| Test scraper accuracy |  |  |  |  |
| Final validation of features |  |  |  |  |

