Extractor — Cursor
Prompt Cursor with:

```
@workspace Scan all .log files in the /logs directory. Extract: error_code,
timestamp, endpoint, status_code. Output: single JSON file with each entry
keyed by filename. Ignore lines without errors. Save to
/extractor/output/errors.json
```

Cursor will generate a script or extract directly, depending on your settings.

File: `extractor/run_extractor.py`
```python
import re

def extract_from_text(self, text: str, file_path: str = None):
    # Tag each entry with the file it came from
    entry = {"_source": file_path}
    for field, pattern in self.schema.items():
        match = re.search(pattern, text, re.IGNORECASE | re.MULTILINE)
        entry[field] = match.group(1) if match else None
    self.results.append(entry)
    return entry
```

That’s your first extraction.
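The method above references `self.schema` and `self.results`, so it needs a surrounding class. Here is a minimal, self-contained sketch of what that class could look like; the `LogExtractor` name, the sample log line, and the regex patterns are illustrative assumptions, not the tutorial's exact code:

```python
import re

class LogExtractor:
    """Hypothetical container for the extraction method shown above."""

    def __init__(self, schema: dict):
        self.schema = schema   # field name -> regex with one capture group
        self.results = []      # entries accumulated across files

    def extract_from_text(self, text: str, file_path: str = None):
        # Tag each entry with the file it came from
        entry = {"_source": file_path}
        for field, pattern in self.schema.items():
            match = re.search(pattern, text, re.IGNORECASE | re.MULTILINE)
            entry[field] = match.group(1) if match else None
        self.results.append(entry)
        return entry

# Illustrative regexes for the four fields named in the prompt
SCHEMA = {
    "error_code": r"\b(E\d{3})\b",
    "timestamp": r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)",
    "endpoint": r"(?:GET|POST|PUT|DELETE)\s+(\S+)",
    "status_code": r"(\d{3})\s*$",
}

extractor = LogExtractor(SCHEMA)
entry = extractor.extract_from_text(
    "2024-05-01T12:00:00Z ERROR E123 GET /api/users 500",
    file_path="logs/api.log",
)
print(entry["error_code"], entry["status_code"])  # → E123 500
```

Fields whose pattern finds no match come back as `None`, which keeps every entry's shape consistent across files.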
Re-run the extractor automatically whenever a log file changes:

```
find data/raw -name "*.log" | entr -r python extractor/run_extractor.py
```

Then ask Cursor AI: “Show me the diff of extracted errors between the last two runs.” Cursor Extractor can output to:
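Cursor handles that diff request conversationally, but the comparison it performs amounts to something simple. A sketch, assuming both runs are dicts keyed by filename (the function name and sample data are hypothetical):

```python
def diff_runs(old: dict, new: dict) -> dict:
    """Return entries that are new or changed in the latest run."""
    return {name: entry for name, entry in new.items()
            if old.get(name) != entry}

old = {"api.log": {"error_code": "E123"}}
new = {"api.log": {"error_code": "E123"},
       "worker.log": {"error_code": "E500"}}
print(diff_runs(old, new))  # → {'worker.log': {'error_code': 'E500'}}
```

Unchanged entries compare equal and drop out, so only regressions and new errors surface.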
```python
extractor.save("extractor/output/structured_logs.json")
```

From there, build your own extractor library.
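The `save` call could be implemented along these lines, assuming entries are keyed by their source filename as the prompt specifies; `save_results` is a hypothetical standalone version of the method:

```python
import json
import os

def save_results(results: list, path: str):
    # Key each entry by its source filename, per the prompt's output format
    keyed = {os.path.basename(e["_source"]): e for e in results}
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(keyed, f, indent=2)

results = [{"_source": "data/raw/api.log", "error_code": "E123"}]
save_results(results, "extractor/output/structured_logs.json")
```

Creating the output directory up front means the first run on a fresh checkout doesn't fail on a missing `extractor/output/` folder.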