Find Duplicates in Array Online
Paste two arrays to spot duplicates, repeated values, and data quality issues — element by element.
🔒 100% private — runs entirely in your browser
Finding duplicates in an array is one of the most common tasks in data processing. Whether you are cleaning up a dataset, validating user input, or ensuring referential integrity in a database, detecting repeated elements is a critical first step. This tool lets you compare an array against a deduplicated version — or against any other array — to instantly see which values appear more than once.
Duplicate detection matters across every layer of software development. In frontend applications, duplicate entries in dropdown lists or autocomplete suggestions create confusing user experiences. In backend systems, duplicate records in database tables lead to incorrect aggregations, inflated counts, and wasted storage. In data pipelines, duplicates introduced during ETL processes can cascade through downstream systems and produce misleading analytics.
Paste your arrays as JSON (e.g., ["a", "b", "a"]). The comparison runs entirely in your browser — no data is transmitted to any server. The tool handles strings, numbers, booleans, nested objects, and mixed-type arrays. Use the "Ignore array order" option when element positions do not matter for your use case.
```javascript
const items = ["alice", "bob", "charlie", "alice", "dave", "bob"];
const seen = new Set();
const duplicates = new Set();
for (const item of items) {
  if (seen.has(item)) {
    duplicates.add(item);
  }
  seen.add(item);
}
console.log([...duplicates]); // ["alice", "bob"]

// One-liner alternative:
const dupes = items.filter((item, i) => items.indexOf(item) !== i);
console.log([...new Set(dupes)]); // ["alice", "bob"]
```

The Set approach runs in O(n) time. The filter + indexOf one-liner is more concise but O(n^2) — avoid it for large arrays.
```python
from collections import Counter

items = ["alice", "bob", "charlie", "alice", "dave", "bob"]
counts = Counter(items)
duplicates = [item for item, count in counts.items() if count > 1]
print(duplicates)  # ['alice', 'bob']
print(counts)      # Counter({'alice': 2, 'bob': 2, 'charlie': 1, 'dave': 1})

# To get unique items only (remove duplicates):
unique = list(dict.fromkeys(items))  # preserves order
print(unique)  # ['alice', 'bob', 'charlie', 'dave']
```

Counter gives you both the duplicates and their frequencies. Use dict.fromkeys() instead of set() when order preservation matters.
```java
import java.util.*;
import java.util.stream.*;

List<String> items = List.of("alice", "bob", "charlie", "alice", "dave", "bob");

// Find duplicates using groupingBy + filtering
List<String> duplicates = items.stream()
    .collect(Collectors.groupingBy(e -> e, Collectors.counting()))
    .entrySet().stream()
    .filter(e -> e.getValue() > 1)
    .map(Map.Entry::getKey)
    .collect(Collectors.toList());
System.out.println(duplicates); // [bob, alice]

// Alternative: HashSet-based detection
Set<String> seen = new HashSet<>();
Set<String> dupes = items.stream()
    .filter(e -> !seen.add(e))
    .collect(Collectors.toSet());
System.out.println(dupes); // [bob, alice]
```

The HashSet.add() trick exploits the fact that add() returns false when the element already exists. Clean and efficient for large collections.
In JavaScript, {"name": "alice"} === {"name": "alice"} is false because objects are compared by reference, not by value. Finding duplicate objects requires serializing them (e.g., JSON.stringify) or using a deep equality function. Be aware that key order affects JSON.stringify output.
"Alice" and "alice" are treated as different values in most languages. If your data has inconsistent casing, normalize it (e.g., .toLowerCase()) before checking for duplicates, or you will miss matches that a human would consider identical.
0.1 + 0.2 produces 0.30000000000000004 in JavaScript and most languages. Two values that look the same when printed may differ at the binary level. Round floating-point numbers to a fixed precision before comparing, or use an epsilon-based comparison.
Paste your array into one panel and a deduplicated version into the other panel, then click Compare. The tool highlights which elements are repeated. Alternatively, paste two different arrays to see which values overlap between them.
Yes. The tool performs deep structural comparison, so it can detect duplicate objects even when they contain nested properties. Two objects with identical key-value pairs at every level are recognized as duplicates.
Use a Set to track seen elements as you iterate. If set.has(item) returns true before you add it, the item is a duplicate. This approach is O(n) time and O(n) space — the best you can achieve for unsorted data.
Yes. This tool runs entirely in your browser using client-side JavaScript. Your array data is never transmitted to any server, making it safe for sensitive, proprietary, or production data.
Finding duplicates looks for repeated elements within a single collection, while comparing two arrays identifies elements that were added, removed, or changed between them. This tool supports both — paste the same array in both panels (one with and one without duplicates) to isolate repeated values.
Yes. The tool handles nested structures including arrays within arrays and objects within arrays. Nested elements are compared at every depth level, so duplicate nested structures will be detected accurately.