Can AI gather financial information?
- Niv Nissenson
- Apr 27
- 4 min read

Intro: Sometimes interacting with AI feels like chatting with “the Internet.” One of its biggest advantages is the ability to save time on online research. It sounds straightforward—but in previous information-gathering tasks, I’ve encountered hallucinations and noticeable gaps. So it was time to run a proper test: can AI reliably gather financial data?
The test: Can AI gather basic financial and stock data from mega tech public companies and put it in a table?
Success Criteria:
- Pull the correct data for each company
- Generate a correct table
Weighted result: Overall 4.2/5
- Gemini: 4.5
- ChatGPT: 4.1
- Claude: 3.9
Verdict: Yes, AI can gather financial information effectively, although it can stumble on complicated data and still requires human oversight.
The Setup:
To write this post, I had to design a good prompt that would give the AI a fair chance to succeed.
Prompt:
For a research that I’m doing on the following public companies I need:
- Their 2025 total revenue
- Their 2025 net income after tax
- Their percentage stock gain in 2025
- Their market cap at December 31, 2025
- Their market cap today

I want the findings to be in a table. The companies are (stock symbol): GOOG, AMZN, AAPL, META, MSFT and NVDA.
Test execution:
Gemini
I used Gemini's "Deep Research" tool and the "thinking" model, which is designed to "solve complex problems".

It took Gemini several minutes to complete the task. It appears the model interpreted the prompt as a request for a full-scale analysis, producing an 18-page report filled with tables, narrative assessments, and detailed commentary on each company and its challenges.

It wasn’t exactly what I had in mind. However, Gemini did produce a table that included all the requested data and followed the format I specified. The next step was to verify its accuracy.

I then manually checked the table's findings:
The financial data for 5 out of 6 companies was correct, despite a few curveballs. Some companies don’t report on a calendar year basis (Jan–Dec). For example, Apple’s fiscal year ends on 9/27 and Microsoft’s on 6/30. Gemini handled this well, correctly reconstructing 2025 figures from quarterly data.
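Reconstructing a calendar year from fiscal-year filings is mechanical once the quarter end dates are known: keep the four quarters whose end dates fall inside the calendar year and sum them. A minimal sketch of the idea, using made-up revenue figures (not Apple's actual results):

```python
from datetime import date

# Hypothetical quarterly revenues in USD billions -- illustrative placeholders,
# not real Apple data. Each entry: (quarter end date, revenue).
quarters = [
    (date(2024, 12, 28), 124.3),  # Q1 FY2025 (fiscal year ends late September)
    (date(2025, 3, 29), 95.4),    # Q2 FY2025
    (date(2025, 6, 28), 94.0),    # Q3 FY2025
    (date(2025, 9, 27), 102.5),   # Q4 FY2025
    (date(2025, 12, 27), 130.0),  # Q1 FY2026
]

# Keep only the four quarters that ended during calendar 2025.
calendar_2025 = [rev for end, rev in quarters if end.year == 2025]
calendar_2025_revenue = sum(calendar_2025)
print(f"Calendar-2025 revenue: ${calendar_2025_revenue:.1f}B")
```

This also shows why NVIDIA resists the same treatment: with quarters ending in late January, April, July, and October, no set of four reported quarters lines up neatly with January–December.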
NVIDIA was a different story. With a fiscal year ending 1/31, it’s not really possible to cleanly derive calendar-year 2025 results from reported data. Gemini didn’t flag this limitation in the table. Instead, it noted in the text: “For its fiscal year ending January 2026, which encapsulates the 2025 calendar year.” That’s not technically accurate—a fiscal year ending January 2026 does not fully correspond to calendar year 2025.
There were also issues with market cap. Rather than calculating NVIDIA’s value as of 12/31/25 (as requested), Gemini appears to have pulled a figure from its report, effectively using a 1/31/26 value instead.
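For reference, market cap as of a given date is just shares outstanding times that day's closing price, which is why a figure pulled from a filing dated 1/31/26 differs from the 12/31/25 value requested. A sketch with placeholder numbers (the share count and prices below are hypothetical, not real NVIDIA data):

```python
# Illustrative placeholders only -- not actual NVIDIA figures.
shares_outstanding = 24.4e9   # hypothetical share count
close_2025_12_31 = 140.00     # hypothetical 12/31/25 closing price
close_2026_01_31 = 155.00     # hypothetical 1/31/26 closing price

market_cap_eoy = shares_outstanding * close_2025_12_31
market_cap_jan = shares_outstanding * close_2026_01_31

# A month of price movement shifts the "as of" valuation materially.
print(f"12/31/25 market cap: ${market_cap_eoy / 1e12:.2f}T")
print(f"1/31/26 market cap:  ${market_cap_jan / 1e12:.2f}T")
```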
Overall, this was a strong result. However, better disclosure around the NVIDIA limitation would have helped—and the 18-page analyst-style report was unnecessary for what was ultimately a simple data request.
ChatGPT
For ChatGPT, I used the “deep research” feature. Similar to Gemini, it produced a lengthy textual report—this time a more modest 8 pages—but it took the longest to complete (over fifteen minutes). It did include a table with all the requested data, although I would have preferred a simpler format.

I then reviewed the data in the report. ChatGPT handled reporting periods differently than Gemini. Instead of reconstructing calendar-year figures, it presented the companies’ reported fiscal-year results (including NVIDIA’s year ending January 31, 2026) and clearly disclosed that Apple, Microsoft, and NVIDIA do not report on a January–December basis.
Overall, I would consider ChatGPT’s output a success from a financial data perspective. There were some variations in the “current market cap,” but these were not hallucinations—ChatGPT acknowledged the limitations and explained the discrepancies in its sourcing.

Claude
Claude understood the assignment better and avoided producing an overly long report. Instead, it delivered a concise table along with a brief explanation highlighting key research notes. It also quickly identified that several companies have different fiscal year ends and clearly noted this.

Unlike Gemini, Claude did not attempt to reconstruct 2025 calendar-year revenue for Apple and Microsoft. Instead, it presented the companies’ reported fiscal-year results (July 2024–June 2025 for Microsoft and October 2024–September 2025 for Apple). It took a similar approach with NVIDIA, using the January 2025 fiscal-year report.
When prompted, Claude was able to aggregate the quarterly figures and produce a report more closely aligned with the calendar-year request.
However, it struggled with current market cap data even more than ChatGPT, resulting in noticeable variations. At first, these appeared to be hallucinations, but they were more likely the result of weak sourcing—pulling from news summaries rather than directly from market data.

Final Score Card:
| Category | Gemini | ChatGPT | Claude |
|---|---|---|---|
| Output Delivered | 4.5 | 4 | 3.5 |
| Hallucinations | 5 | 4.5 | 4 |
| Quality | 4 | 4 | 4.5 |
| Ease of Use | 4.5 | 4 | 4 |
| Reliability | 4.5 | 4 | 3.5 |
| Bottom line | 4.5 | 4.1 | 3.9 |
Bottom line:
AI was able to gather data from credible sources, handle a few curveballs, and either resolve or disclose most of them. The end result was a solid report—though not without some issues, particularly around market cap accuracy.
Can a human do it better:
A trained human would likely produce a more precise result.
That said, this is exactly the kind of task you want a computer to handle—provided you don’t need to fully verify every data point.


