Here's an adaptation of the program from the asynchronous programming section that fetches URLs and writes the content retrieved from each website to its own file in the current working directory.
This version uses the aiohttp library for making HTTP requests asynchronously and incorporates file writing operations:
import asyncio
import aiohttp

async def fetch_and_save(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            content = await response.text()

    # Create a filename from the URL
    file_name = url.replace('https://', '').replace('http://', '').replace('/', '_') + '.txt'

    # Write the content to a file in the default directory
    with open(file_name, 'w', encoding='utf-8') as f:
        f.write(content)

    print(f"Content from {url} has been written to {file_name}")

async def main():
    urls = ["http://example.com", "https://api.github.com"]
    # Start tasks to fetch URLs and write contents to files
    await asyncio.gather(*(fetch_and_save(url) for url in urls))

# Run the main function to start the async program
asyncio.run(main())
Explanation of Changes:
Function fetch_and_save: This function handles both fetching a website's content and writing it to a file.
It builds the filename by stripping the protocol (http:// or https://) from the URL and replacing any slashes with underscores, which keeps the name flat and avoids directory traversal issues; for example, https://api.github.com becomes api.github.com.txt.
It then writes the retrieved content to that file.
File Writing: The open() function is used to create and write the file in write mode ('w'), so any existing file with the same name is overwritten. The filename is built dynamically from the URL so that each site's content goes to its own file.
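Note that open() and f.write() are ordinary blocking calls; for small responses this is usually fine, but if truly non-blocking writes are wanted, the third-party aiofiles package (not used in the example above, and assumed here to be installed) offers asynchronous file I/O. A minimal sketch of fetch_and_save rewritten with it:

import aiofiles  # third-party package, assumed to be installed (pip install aiofiles)

async def fetch_and_save(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            content = await response.text()

    file_name = url.replace('https://', '').replace('http://', '').replace('/', '_') + '.txt'

    # aiofiles.open() is an async context manager and its write() is awaitable,
    # so the event loop can keep servicing other tasks while the file is written
    async with aiofiles.open(file_name, 'w', encoding='utf-8') as f:
        await f.write(content)

    print(f"Content from {url} has been written to {file_name}")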
Error Handling: Basic error handling, such as checking the response status or catching exceptions raised during the HTTP request or the file write, is omitted here; in production code it should be added based on the specific requirements.
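As one illustration of what that might look like (a sketch, not part of the original program), fetch_and_save could call response.raise_for_status() to reject 4xx/5xx responses, catch aiohttp's ClientError for request failures, and catch OSError for file problems:

async def fetch_and_save(url):
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                response.raise_for_status()  # raises ClientResponseError for 4xx/5xx
                content = await response.text()
    except aiohttp.ClientError as exc:
        print(f"Request for {url} failed: {exc}")
        return

    file_name = url.replace('https://', '').replace('http://', '').replace('/', '_') + '.txt'
    try:
        with open(file_name, 'w', encoding='utf-8') as f:
            f.write(content)
    except OSError as exc:
        print(f"Could not write {file_name}: {exc}")
        return

    print(f"Content from {url} has been written to {file_name}")

Handling failures inside fetch_and_save also keeps one bad URL from aborting main(), since asyncio.gather() would otherwise propagate the first exception it encounters.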
This script demonstrates how to combine asynchronous HTTP requests with file I/O in Python, useful for web scraping tasks, data collection, or caching web content locally.