We are given a string that may contain one or more URLs and our task is to extract them efficiently. This is useful for web scraping, text processing, and data validation. For example:
Input:
s = "My Profile: https://www.geeksforgeeks.org/user/Prajjwal%20/contributions/ in the portal of https://www.geeksforgeeks.org/"
Output:
['https://www.geeksforgeeks.org/user/Prajjwal%20/contributions/', 'https://www.geeksforgeeks.org/']
Python's regular expressions (re) module lets us extract patterns like URLs from text. Its re.findall() function finds all occurrences of a pattern in a given string and returns them as a list.
Python
import re

s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'

# Match anything starting with http:// or https://, or with www.
pattern = r'https?://\S+|www\.\S+'
print("URLs:", re.findall(pattern, s))
URLs: ['https://www.geeksforgeeks.org/404.html/', 'https://www.geeksforgeeks.org/']
Explanation: The pattern https?://\S+|www\.\S+ matches any substring beginning with http:// or https:// (the ? makes the s optional) or with www., followed by non-whitespace characters. re.findall() returns every such match as a list.
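If we also need to know where each URL occurs in the string, re.finditer() returns match objects that carry positions. A minimal sketch using the same pattern:
Python
import re

s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'
pattern = r'https?://\S+|www\.\S+'

# finditer() yields match objects, so we get both the URL and its start index
for m in re.finditer(pattern, s):
    print(m.group(), "found at index", m.start())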
The urlparse() function from Python's urllib.parse module breaks a URL down into its key parts, such as the scheme (http, https), the domain name, the path, query parameters, and fragments. It is useful for extracting URLs from text because we can check whether a word follows a proper URL structure.
Python
from urllib.parse import urlparse

s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'

# Split the string into words
split_s = s.split()

# Empty list to collect URLs
urls = []

for word in split_s:
    parsed = urlparse(word)
    # A valid URL has both a scheme (https) and a domain (netloc)
    if parsed.scheme and parsed.netloc:
        urls.append(word)

print("URLs:", urls)
URLs: ['https://www.geeksforgeeks.org/404.html/', 'https://www.geeksforgeeks.org/']
Explanation: The string is split into words and each word is parsed with urlparse(). A word is treated as a URL only if it has both a scheme (like https) and a network location (the domain), which filters out ordinary words.
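To see what urlparse() actually returns, here is a minimal sketch that breaks one of the URLs above into its components (scheme, netloc, and path are fields of the documented result object):
Python
from urllib.parse import urlparse

parsed = urlparse('https://www.geeksforgeeks.org/404.html/')
print(parsed.scheme)   # https
print(parsed.netloc)   # www.geeksforgeeks.org
print(parsed.path)     # /404.html/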
urlextract is a third-party library, so we first need to install it by running "pip install urlextract" in our terminal. It offers a pre-built solution for finding URLs in text: its URLExtract class identifies URLs without custom patterns, making it a convenient choice for tricky extractions (for example, URLs containing encoded characters).
Python
from urlextract import URLExtract

s = 'My Profile: https://www.geeksforgeeks.org/user/Prajjwal%20/contributions/ in the portal of https://www.geeksforgeeks.org/'

# URLExtract detects URLs by looking for known top-level domains
extractor = URLExtract()
urls = extractor.find_urls(s)
print("URLs:", urls)
URLs: ['https://www.geeksforgeeks.org/user/Prajjwal%20/contributions/', 'https://www.geeksforgeeks.org/']
Explanation: URLExtract() scans the text for substrings ending in known top-level domains, so find_urls() picks up URLs without us writing a pattern, including ones with encoded characters like %20.
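A small sketch of deduplicating the results, assuming find_urls() supports the only_unique parameter as in recent versions of the library (check your installed version's documentation):
Python
from urlextract import URLExtract

s = 'Visit https://www.geeksforgeeks.org/ and https://www.geeksforgeeks.org/ again'
extractor = URLExtract()

# only_unique=True (assumed available) returns each distinct URL once
print(extractor.find_urls(s, only_unique=True))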
One simple approach is to split the string into words with the .split() method and check whether each word starts with "http://" or "https://" using the built-in .startswith() method. If it does, we add it to our list of extracted URLs.
Python
s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'

# Split the string into words
x = s.split()

# Empty list to collect the URLs
res = []

for i in x:
    # Keep words that begin with either scheme
    if i.startswith("https:") or i.startswith("http:"):
        res.append(i)

print("URLs:", res)
URLs: ['https://www.geeksforgeeks.org/404.html/', 'https://www.geeksforgeeks.org/']
Explanation: The string is split on whitespace and each word is kept only if it starts with "https:" or "http:"; matching words are collected in res.
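Note that .startswith() also accepts a tuple of prefixes, which lets us write the same check as a one-liner. A compact sketch of the same idea:
Python
s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'

# startswith() accepts a tuple, so both schemes are checked in one call
res = [word for word in s.split() if word.startswith(("https:", "http:"))]
print("URLs:", res)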
find() is a built-in string method that returns the index of the first occurrence of a substring, or -1 if it is not found. If find("https:") or find("http:") returns 0, the word starts with that prefix, so we can use it to identify and extract URLs from a string. Here's how:
Python
s = 'My Profile: https://www.geeksforgeeks.org/404.html/ in the portal of https://www.geeksforgeeks.org/'

split_s = s.split()

# Empty list to collect the URLs
res = []

for i in split_s:
    # find() returns 0 only when the word starts with the prefix
    if i.find("https:") == 0 or i.find("http:") == 0:
        res.append(i)

print("URLs:", res)
URLs: ['https://www.geeksforgeeks.org/404.html/', 'https://www.geeksforgeeks.org/']
Explanation: find() returns the index of the first occurrence of the substring, so a return value of 0 means the word begins with "https:" or "http:", and such words are collected as URLs.
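To make the return values concrete, a tiny sketch of how find() behaves on a few example strings:
Python
print("https://example.com".find("https:"))      # 0  -> starts with the prefix
print("say https://example.com".find("https:"))  # 4  -> prefix occurs, but not at the start
print("no url here".find("https:"))              # -1 -> prefix not found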