RNet : Ultra-Fast Python HTTP Client with Advanced TLS Fingerprinting


In the rapidly evolving landscape of web scraping and HTTP automation, Python developers have long struggled with the limitations of traditional HTTP libraries. While requests has been the go-to choice for many, it lacks the sophisticated browser emulation capabilities needed to bypass modern anti-bot systems. Enter rnet, a revolutionary Python HTTP client that combines the performance of Rust with the convenience of Python, offering unparalleled TLS fingerprinting and browser emulation capabilities.

What is RNet?

RNet is a blazing-fast HTTP client for Python that's built on a Rust foundation, providing native-level performance with Python's ease of use. Unlike traditional Python HTTP libraries that rely on basic User-Agent spoofing, rnet offers sophisticated TLS fingerprinting, HTTP/2 support, and comprehensive browser emulation that makes your requests virtually indistinguishable from real browser traffic.

The library provides Python bindings for the powerful HTTP engine used in wreq, bringing enterprise-grade browser emulation capabilities to the Python ecosystem. This means you can enjoy familiar Python syntax while benefiting from Rust's performance and advanced TLS handling. You can browse the documentation if you want to understand the project in more detail.
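To see the fingerprinting in action, you can point a client at a TLS fingerprint echo service such as tls.peet.ws and inspect what the server observed. A minimal sketch, assuming the top-level Client and Impersonate imports shown in the project README:

import asyncio
from rnet import Client, Impersonate

async def check_fingerprint():
    # tls.peet.ws echoes back the TLS and HTTP/2 fingerprint it observed
    client = Client(impersonate=Impersonate.Chrome131)
    response = await client.get("https://tls.peet.ws/api/all")
    print(await response.text())

asyncio.run(check_fingerprint())

If the emulation is working, the reported fingerprints should match a real Chrome build rather than a generic Python client.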

Key Features

RNet inherits its core strengths from wreq: Rust-level performance, TLS fingerprinting, full browser emulation, HTTP/2 support, and an async-first design. To learn more about the advantages and use cases, take a look at our first article on the project: Wreq: Rust HTTP Client for Browser Emulation and TLS Fingerprinting

Getting Started

Install rnet using pip:

pip install rnet

Basic Usage

Here's a simple example to get you started:

import asyncio
from rnet import Client, Impersonate, Proxy

async def main():
    # Create a client with Chrome emulation
    client = Client(
        impersonate=Impersonate.Chrome131,
        proxies=[Proxy.all("http://proxy.example.com:8080")],  # Optional
        timeout=30
    )

    # Make a request
    response = await client.get("https://httpbin.org/json")
    print(f"Status: {response.status_code}")
    print(f"Content: {await response.text()}")

# Run the async function
asyncio.run(main())
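Because the client is async-first, concurrent requests are the natural next step. Here is a small sketch that reuses one client across several URLs with asyncio.gather (the URLs are placeholders):

import asyncio
from rnet import Client, Impersonate

async def fetch_all(urls):
    client = Client(impersonate=Impersonate.Chrome131, timeout=30)
    # Fire all requests concurrently over the client's shared connection pool
    responses = await asyncio.gather(*(client.get(url) for url in urls))
    return [response.status_code for response in responses]

statuses = asyncio.run(fetch_all([
    "https://httpbin.org/json",
    "https://httpbin.org/headers",
]))
print(statuses)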

Advanced Examples

E-commerce Price Monitoring System

import asyncio
import aiofiles
import json
from datetime import datetime
from rnet import Client, Impersonate

class PriceMonitor:
    def __init__(self):
        self.client = None
        self.products = [
            {
                "name": "iPhone 15 Pro",
                "url": "https://example-store.com/iphone-15-pro",
                "price_selector": ".price-current"
            },
            {
                "name": "Samsung Galaxy S24",
                "url": "https://example-store.com/galaxy-s24",
                "price_selector": ".product-price"
            }
        ]
    
    async def init_client(self):
        """Initialize the HTTP client with browser emulation and realistic default headers."""
        self.client = Client(
            impersonate=Impersonate.Chrome131,
            timeout=30,
            allow_redirects=True,
            # Default headers sent with every request from this client
            headers={
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
                "Accept-Language": "en-US,en;q=0.5",
                "Accept-Encoding": "gzip, deflate, br",
                "DNT": "1",
                "Connection": "keep-alive",
                "Upgrade-Insecure-Requests": "1",
                "Sec-Fetch-Dest": "document",
                "Sec-Fetch-Mode": "navigate",
                "Sec-Fetch-Site": "none",
                "Cache-Control": "max-age=0"
            }
        )
    
    async def get_product_price(self, product):
        """Extract price from product page."""
        try:
            response = await self.client.get(product["url"])
            
            if response.status_code == 200:
                # Here you would parse the HTML to extract price
                # For demonstration, we'll simulate price extraction
                html = await response.text()
                
                # In a real scenario, you'd use BeautifulSoup or similar
                # from bs4 import BeautifulSoup
                # soup = BeautifulSoup(html, 'html.parser')
                # price_element = soup.select_one(product["price_selector"])
                # price = price_element.text.strip() if price_element else "N/A"
                
                # Simulated price extraction
                price = "999.99"  # This would be extracted from HTML
                
                return {
                    "product": product["name"],
                    "price": price,
                    "timestamp": datetime.now().isoformat(),
                    "status": "success"
                }
            else:
                return {
                    "product": product["name"],
                    "price": None,
                    "timestamp": datetime.now().isoformat(),
                    "status": f"error_{response.status_code}"
                }
                
        except Exception as e:
            return {
                "product": product["name"],
                "price": None,
                "timestamp": datetime.now().isoformat(),
                "status": f"error_{str(e)}"
            }
    
    async def monitor_prices(self):
        """Monitor prices for all products concurrently."""
        await self.init_client()

        tasks = [self.get_product_price(product) for product in self.products]
        results = await asyncio.gather(*tasks)

        # Append results as JSON lines
        async with aiofiles.open("price_data.json", "a") as f:
            for result in results:
                await f.write(json.dumps(result) + "\n")

        return results

# Usage
async def main():
    monitor = PriceMonitor()
    results = await monitor.monitor_prices()
    
    for result in results:
        print(f"{result['product']}: {result['price']} ({result['status']})")

asyncio.run(main())
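The price extraction above is deliberately stubbed out. A minimal parsing sketch with BeautifulSoup, assuming bs4 is installed and the page actually renders the price under the configured CSS selector:

from bs4 import BeautifulSoup

def extract_price(html: str, selector: str) -> str:
    """Return the text of the first element matching the selector, or 'N/A'."""
    soup = BeautifulSoup(html, "html.parser")
    element = soup.select_one(selector)
    return element.get_text(strip=True) if element else "N/A"

# Inside get_product_price, replace the simulated value with:
# price = extract_price(html, product["price_selector"])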

Social Media Data Collection

import asyncio
from rnet import Client, Impersonate

class SocialMediaCollector:
    def __init__(self):
        self.client = None
        self.collected_data = []
    
    async def init_client(self):
        """Initialize client with mobile Safari emulation for better compatibility."""
        self.client = Client(
            impersonate=Impersonate.SafariIos17_4_1,
            timeout=30,
            # Mobile-specific default headers
            headers={
                "Accept": "application/json, text/plain, */*",
                "Accept-Language": "en-US,en;q=0.9",
                "User-Agent": (
                    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_4_1 like Mac OS X) "
                    "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4.1 "
                    "Mobile/15E148 Safari/604.1"
                ),
                "X-Requested-With": "XMLHttpRequest",
                "Sec-Fetch-Mode": "cors",
                "Sec-Fetch-Site": "same-origin"
            }
        )
    
    async def collect_posts(self, hashtag, limit=50):
        """Collect posts for a specific hashtag."""
        await self.init_client()
        
        try:
            # Example API endpoint (replace with an actual endpoint)
            api_url = "https://api.example-social.com/posts"
            
            params = {
                "hashtag": hashtag,
                "limit": limit,
                "sort": "recent"
            }
            
            response = await self.client.get(api_url, params=params)
            
            if response.status_code == 200:
                data = await response.json()
                posts = data.get("posts", [])
                
                processed_posts = []
                for post in posts:
                    processed_post = {
                        "id": post.get("id"),
                        "text": post.get("text"),
                        "likes": post.get("likes", 0),
                        "shares": post.get("shares", 0),
                        "timestamp": post.get("created_at"),
                        "hashtag": hashtag
                    }
                    processed_posts.append(processed_post)
                
                self.collected_data.extend(processed_posts)
                print(f"Collected {len(processed_posts)} posts for #{hashtag}")
                
                return processed_posts
            else:
                print(f"Failed to collect posts: {response.status_code}")
                return []
                
        except Exception as e:
            print(f"Error collecting posts: {e}")
            return []
    
    async def analyze_engagement(self):
        """Analyze engagement metrics from collected data."""
        if not self.collected_data:
            return {}
        
        total_posts = len(self.collected_data)
        total_likes = sum(post["likes"] for post in self.collected_data)
        total_shares = sum(post["shares"] for post in self.collected_data)
        
        return {
            "total_posts": total_posts,
            "total_likes": total_likes,
            "total_shares": total_shares,
            "avg_likes_per_post": total_likes / total_posts,
            "avg_shares_per_post": total_shares / total_posts
        }

# Usage
async def main():
    collector = SocialMediaCollector()
    
    # Collect posts for multiple hashtags
    hashtags = ["python", "webdev", "machinelearning"]
    
    for hashtag in hashtags:
        await collector.collect_posts(hashtag, limit=100)
        await asyncio.sleep(1)  # Rate limiting
    
    # Analyze results
    metrics = await collector.analyze_engagement()
    print(f"Analysis Results: {metrics}")

asyncio.run(main())
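The one-second sleep between hashtags is a deliberately blunt rate limiter. For longer hashtag lists you can bound concurrency instead; a sketch using asyncio.Semaphore (the limit of three is an arbitrary illustration, not a documented rnet setting):

import asyncio

async def collect_many(collector, hashtags, max_concurrent=3):
    # Allow at most max_concurrent collections in flight at any moment
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded_collect(hashtag):
        async with semaphore:
            return await collector.collect_posts(hashtag, limit=100)

    return await asyncio.gather(*(bounded_collect(h) for h in hashtags))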

Multi-Browser API Testing

import asyncio
import json
import time
from rnet import Client, Impersonate

class APITester:
    def __init__(self):
        self.test_results = {}
        self.emulations = [
            Impersonate.Chrome131,
            Impersonate.Firefox133,
            Impersonate.Safari18,
            Impersonate.Edge134,
            Impersonate.Opera116
        ]
    
    async def test_endpoint(self, url, emulation):
        """Test API endpoint with specific browser emulation."""
        client = Client(
            impersonate=emulation,
            timeout=30
        )
        
        try:
            start_time = time.time()
            response = await client.get(url)
            end_time = time.time()
            
            # Read the body once so it can be measured and parsed below
            body = await response.bytes()

            result = {
                "emulation": str(emulation),
                "status_code": response.status_code,
                "response_time": round((end_time - start_time) * 1000, 2),  # ms
                "content_length": len(body),
                "headers": dict(response.headers)
            }

            # Check for specific response characteristics
            if response.status_code == 200:
                try:
                    json_data = json.loads(body)
                    result["json_valid"] = True
                    result["json_keys"] = list(json_data.keys()) if isinstance(json_data, dict) else []
                except (ValueError, TypeError):
                    result["json_valid"] = False
            
            return result
            
        except Exception as e:
            return {
                "emulation": str(emulation),
                "error": str(e),
                "status_code": None,
                "response_time": None
            }
    
    async def run_tests(self, urls):
        """Run tests across multiple URLs and emulations."""
        for url in urls:
            print(f"\nTesting: {url}")
            self.test_results[url] = {}
            
            # Create tasks for parallel testing
            tasks = [
                self.test_endpoint(url, emulation) 
                for emulation in self.emulations
            ]
            
            results = await asyncio.gather(*tasks)
            
            for result in results:
                emulation = result["emulation"]
                self.test_results[url][emulation] = result
                
                if result.get("status_code"):
                    print(f"  {emulation}: {result['status_code']} ({result['response_time']}ms)")
                else:
                    print(f"  {emulation}: ERROR - {result.get('error', 'Unknown')}")
    
    def generate_report(self):
        """Generate a comprehensive test report."""
        report = {
            "summary": {},
            "detailed_results": self.test_results
        }
        
        for url, results in self.test_results.items():
            successful_tests = sum(1 for r in results.values() if r.get("status_code") == 200)
            total_tests = len(results)
            
            timed = [r["response_time"] for r in results.values() if r.get("response_time")]
            avg_response_time = sum(timed) / max(len(timed), 1)
            
            report["summary"][url] = {
                "success_rate": f"{successful_tests}/{total_tests}",
                "avg_response_time": round(avg_response_time, 2)
            }
        
        return report

# Usage
async def main():
    tester = APITester()
    
    # Test multiple API endpoints
    test_urls = [
        "https://httpbin.org/json",
        "https://api.github.com/user",
        "https://jsonplaceholder.typicode.com/posts/1"
    ]
    
    await tester.run_tests(test_urls)
    
    # Generate and display report
    report = tester.generate_report()
    print("\n" + "="*50)
    print("TEST SUMMARY")
    print("="*50)
    
    for url, summary in report["summary"].items():
        print(f"URL: {url}")
        print(f"  Success Rate: {summary['success_rate']}")
        print(f"  Avg Response Time: {summary['avg_response_time']}ms")
        print()

asyncio.run(main())
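Transient network errors or rate limits will show up as failed rows in the report. A simple retry wrapper with exponential backoff, plain asyncio rather than an rnet feature, can be layered over test_endpoint:

import asyncio

async def with_retries(make_call, attempts=3, base_delay=1.0):
    """Retry an async call with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(attempts):
        try:
            return await make_call()
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** attempt))

# Usage with the tester defined above:
# result = await with_retries(lambda: tester.test_endpoint(url, Impersonate.Chrome131))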

Advanced Configuration

Custom impersonation settings

from rnet import Client, Impersonate, ImpersonateOS

# Create client with specific OS emulation
client = Client(
    impersonate=Impersonate.Chrome131,
    impersonate_os=ImpersonateOS.Windows,
    timeout=30
)

Proxy configuration

from rnet import Client, Impersonate, Proxy

# HTTP proxy
client = Client(
    impersonate=Impersonate.Firefox133,
    proxies=[Proxy.all("http://username:password@proxy.example.com:8080")]
)

# SOCKS proxy
client = Client(
    impersonate=Impersonate.Chrome131,
    proxies=[Proxy.all("socks5://username:password@proxy.example.com:1080")]
)
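To rotate across a proxy pool, one simple pattern is to build a client per proxy and cycle through them round-robin. This sketch reuses the Proxy helper shown above; the pool contents are placeholders:

import itertools
from rnet import Client, Impersonate, Proxy

proxy_urls = [
    "http://username:password@proxy1.example.com:8080",
    "http://username:password@proxy2.example.com:8080",
]

# One client per proxy, cycled round-robin across requests
clients = itertools.cycle([
    Client(impersonate=Impersonate.Chrome131, proxies=[Proxy.all(url)])
    for url in proxy_urls
])

async def fetch(url):
    return await next(clients).get(url)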

Use context managers

from rnet import Client, Impersonate

async def safe_request():
    async with Client(impersonate=Impersonate.Chrome131) as client:
        response = await client.get("https://example.com")
        return await response.text()
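From synchronous code, the helper runs with a single call:

import asyncio

print(asyncio.run(safe_request()))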

Comparison with Other Python HTTP Libraries

Feature              rnet        requests    aiohttp     httpx
TLS Fingerprinting   Yes         No          No          No
Browser Emulation    Yes         No          No          No
Async Support        Yes         No          Yes         Yes
HTTP/2 Support       Yes         No          No          Yes
Performance          Excellent   Good        Good        Good
Memory Usage         Low         Medium      Medium      Low
Anti-bot Bypass      Yes         No          No          No

Conclusion

RNet represents a paradigm shift in Python HTTP client capabilities. By combining the performance of Rust with the simplicity of Python, it provides developers with unprecedented control over HTTP requests and browser emulation. Its sophisticated TLS fingerprinting capabilities make it an invaluable tool for web scraping, API testing, and any application that requires authentic browser behavior.

The library's async-first design ensures maximum performance for concurrent operations, while its comprehensive browser emulation capabilities enable successful interaction with even the most sophisticated anti-bot systems. Whether you're building a large-scale data collection system, conducting API testing, or simply need to bypass modern web protections, rnet provides the tools and performance to get the job done.

As web technologies continue to evolve and anti-bot measures become more sophisticated, libraries like rnet will become increasingly essential for legitimate use cases that require advanced HTTP client capabilities. The combination of Rust's performance with Python's ease of use makes rnet a compelling choice for any project that demands both speed and sophistication.