Excelize 2.10.0 Released - Open-source library for spreadsheet (Excel) documents

We are pleased to announce the release of version 2.10.0. Featured are a handful of new areas of functionality and numerous bug fixes.

A summary of changes is available in the Release Notes.

Release Notes

The most notable changes in this release are:

Breaking Change

  • The required Go language version has been upgraded to 1.24.0 or later, due to an upgrade of the dependency package golang.org/x/crypto
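    In practice this usually just means bumping the go directive in your module file; a minimal example (the module path here is a placeholder):

    ```
    module example.com/myapp

    go 1.24.0

    require github.com/xuri/excelize/v2 v2.10.0
    ```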

Notable Features

  • Add new exported error variable ErrTransparency
  • Add new ChartDashType, CustomProperty and ZipWriter data types
  • Add new field Border to the ChartMarker data type
  • Add new field Font to the ChartLegend data type
  • Add new field Legend to the ChartSeries data type
  • Add new field Transparency to the Fill data type
  • Add new fields Dash and Fill to the ChartLine data type
  • Add new field TmpDir to the Options data type, which supports specifying a custom temporary directory for creating temporary files (see the sketch after this list), related issue 2024
  • Add new field Charset to the Font data type, which supports explicitly specifying font encodings when generating spreadsheets
  • Add new functions GetCustomProps and SetCustomProps, which support getting and setting workbook custom properties, related issue 2146
  • Add new function SetZipWriter, which supports setting a custom ZIP writer, related issue 2199
  • Add optional parameter withoutValues for the GetMergeCells function
  • The DeleteDataValidation function supports deleting data validation in the extension list, and supports deleting data validation for multiple cell ranges given as a reference sequence slice or a blank-separated reference sequence string, related issue 2133
  • The AddChart function supports setting the dash line and marker border type of charts
  • The AddChart function supports setting the font for chart legends, related issue 2169
  • The AddChart and AddChartSheet functions support creating 4 kinds of stock charts: High-Low-Close, Open-High-Low-Close, Volume-High-Low-Close and Volume-Open-High-Low-Close
  • The CalcCellValue function supports the BAHTTEXT formula function
  • Skip falling back to the default font size when creating a style if the font size is less than the minimum size
  • Support parsing number format codes with the Hijri and Gregorian calendars
  • Support setting transparency for charts and shapes, related issue 2176
  • Support applying number formats in 8 new languages: Corsican, Croatian, Croatian (Latin), Czech, Danish, Divehi, Dutch and Dzongkha
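    A rough sketch combining two of the additions above, the TmpDir option and the optional GetMergeCells parameter. The parameter is assumed here to be a bool named withoutValues, and /tmp/excelize and Book1.xlsx are placeholders; check the v2.10.0 godoc for the exact signatures:

    ```go
    package main

    import (
    	"fmt"
    	"log"

    	"github.com/xuri/excelize/v2"
    )

    func main() {
    	// New in v2.10.0: Options.TmpDir points temporary files at a custom directory.
    	f := excelize.NewFile(excelize.Options{TmpDir: "/tmp/excelize"})
    	defer f.Close()

    	if err := f.MergeCell("Sheet1", "A1", "B2"); err != nil {
    		log.Fatal(err)
    	}

    	// New in v2.10.0: GetMergeCells takes an optional parameter to skip
    	// reading the merged cells' values (assumed to be a bool here).
    	merged, err := f.GetMergeCells("Sheet1", true)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(len(merged), "merged range(s)")

    	if err := f.SaveAs("Book1.xlsx"); err != nil {
    		log.Fatal(err)
    	}
    }
    ```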

Compatibility Improvements

  • Remove all leading equal symbols when setting a cell formula, to improve compatibility with Apple Numbers, related issue 2145
  • Use relative sheet target paths in the internal workbook relationship parts

Bug Fixes

  • Fix a v2.9.1 regression that caused the build to fail on ARMv7 architectures, resolve issue 2132
  • Fix the number format parser dropping empty literals at the end of the number format
  • Fix panic when getting a string item with an invalid offset range, resolve issues 2019 and 2150
  • Fix panic when reading unsupported pivot table cache source types, resolve issue 2161
  • Fix incorrect character verification by counting characters as single runes in the character length limit check, resolve issue 2167
  • Fix adding a pivot table causing workbook corruption in Excel for Mac, resolve issue 2180
  • Fix incorrect month name abbreviations when reading cells with the Tibetan language number format code
  • Fix special date number format results that were inconsistent with Excel, resolve issue 2192

Performance

  • Optimize the GetSheetDimension function by parsing the worksheet XML in stream mode, speeding it up by about 95% and reducing memory usage by about 96%
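    For reference, a minimal call against an existing workbook (a sketch; Book1.xlsx and Sheet1 are placeholders, and the function is assumed to return the dimension reference as a string):

    ```go
    package main

    import (
    	"fmt"
    	"log"

    	"github.com/xuri/excelize/v2"
    )

    func main() {
    	f, err := excelize.OpenFile("Book1.xlsx")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// The dimension is now read by parsing the worksheet XML in stream mode.
    	dim, err := f.GetSheetDimension("Sheet1")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(dim) // e.g. "A1:D5"
    }
    ```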

Miscellaneous

  • The dependency modules have been updated
  • Unit tests and godoc updated
  • The multilingual documentation website has been updated: Arabic, German, English, Spanish, French, Italian, Japanese, Korean, Portuguese, Russian, Simplified Chinese and Traditional Chinese
  • excelize-wasm NPM package release update for WebAssembly / JavaScript support
  • excelize PyPI package release update for Python
  • ExcelizeCs NuGet .NET package release for C#
  • Add a new logo for Excelize

Thank you

Thanks to all the contributors to Excelize. Below is a list of contributors who made code contributions to this version:

  • DengY11 (Yi Deng)
  • JerryLuo-2005
  • aliavd1 (Ali Vatandoost)
  • xiaoq898
  • Now-Shimmer
  • Jameshu0513
  • mengpromax (MengZhongYuan)
  • Leopard31415926
  • hongjr03 (Hong Jiarong)
  • juefeng
  • black-butler
  • Neugls
  • Leo012345678
  • a2659802
  • torotake
  • crush-wu
  • zhuyanhuazhuyanhua
  • shcabin

As always, thanks for your work. Excelize is excellent.

Thanks for this. I didn’t know about it before. I might need this sometime soon! :sweat_smile:

(For cross readers: Excel files are just ZIP files. Simply rename one, open it, and check out a simple table - then you'll see what the team needs to overcome :exploding_head:)
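
To see that for yourself, a few lines with Go's standard archive/zip package are enough (Book1.xlsx is just a placeholder path):

```go
package main

import (
	"archive/zip"
	"fmt"
	"log"
)

func main() {
	// An .xlsx file is a ZIP archive of XML parts; list its entries.
	r, err := zip.OpenReader("Book1.xlsx")
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	for _, f := range r.File {
		fmt.Println(f.Name) // e.g. xl/workbook.xml, xl/worksheets/sheet1.xml
	}
}
```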

If you need inspiration, please consider implementing or looking into this feature:

Implement a way to read XLS files (I think this is already done?) and then convert them to plain text so that an LLM can understand them. This needs a fixed format and some testing to ensure data is clearly defined. For example, it needs to be clear whether a cell value is a literal value or the result of a calculation. Furthermore, it needs to be clear which cell is being referenced.

Example: “A2” + “D23” is only clear if we know what the heck cell “D23” contains.
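
For what it's worth, Excelize already exposes the raw ingredients for that distinction; a minimal sketch, assuming a workbook named Book1.xlsx with a sheet named Sheet1:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xuri/excelize/v2"
)

func main() {
	f, err := excelize.OpenFile("Book1.xlsx")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	for _, cell := range []string{"A2", "D23"} {
		value, err := f.GetCellValue("Sheet1", cell)
		if err != nil {
			log.Fatal(err)
		}
		formula, err := f.GetCellFormula("Sheet1", cell)
		if err != nil {
			log.Fatal(err)
		}
		if formula != "" {
			// Computed cell: emit both the formula and its cached result.
			fmt.Printf("%s = %s (formula: =%s)\n", cell, value, formula)
		} else {
			// Literal cell.
			fmt.Printf("%s = %s (literal)\n", cell, value)
		}
	}
}
```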

Example app: With this feature, you could just upload an XLS file to the app, and the app would convert it. Then you could ask the LLM questions about it.

Reliable writing would be the icing on the cake. So basically LLM Text → XLS.

Good to see you, Karl. I’ve used many Excel libraries and this one is the best I’ve tried. Ironically I prefer it over Microsoft’s .NET library. It’s fast, intuitive, and easy to use.

Why wouldn’t you have your LLM read the Excel file directly? I asked ChatGPT what it can do with Excel files and here’s what it said:

Yes :white_check_mark: — if you upload an Excel file (.xlsx or .xls), I can:

  • :open_book: Read its contents (e.g., extract rows, columns, sheet names, cell values)
  • :writing_hand: Modify it (add/edit/delete cells, rows, columns, or entire sheets)
  • :abacus: Do calculations, transformations, or filtering
  • :floppy_disk: Then export a new Excel file back to you with the changes applied.

Typical things I can help with:

  • Cleaning data (removing empty rows, normalizing formats, trimming spaces)
  • Combining or splitting columns
  • Adding calculated fields
  • Reformatting dates, numbers, etc.
  • Sorting or grouping data
  • Generating summary tables or reports

:paperclip: Just upload your Excel file, and tell me what kind of modification or extraction you want.

So, it looks like converting Excel files to text is redundant when you can have your LLM read the file directly. Also, I would contend a project like that would be better handled outside the Excelize library itself (separation of duties!).


Good to hear from you as well, Dean. :waving_hand:

LLMs can’t read Excel files directly because they are purely text-based at their core: they generate output token by token and only work with text input.
What happens under the hood is that tools like ChatGPT extract the file content as text in their application layer before the LLM processes it.

However, there’s a better approach: handle text extraction in your OWN application layer and pass only the data you need to the LLM.

This approach gives you several advantages.
First, you’re not locked into LLMs that support file parsing—you can use cheaper models, self-hosted solutions, or switch providers mid-application without changing your core logic.

Second, since LLMs are stateless and need the full conversation context on every call, managing that context yourself lets you be surgical about what gets passed in. This matters because irrelevant or incorrect information in the context degrades model performance—if an earlier wrong answer stays in the context, it continues to confuse the model downstream.

By controlling state as a developer, you can strategically feed the LLM only relevant data, which improves response quality while reducing token usage and costs. You also get flexibility to switch between providers or models on the fly based on performance or budget.
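
As a rough illustration of that “extract in your own application layer” approach with Excelize (a sketch only; the file name, sheet name and prompt format are made up):

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/xuri/excelize/v2"
)

// sheetAsText flattens one worksheet into tab-separated text that can be
// embedded in an LLM prompt, so the model never has to see the file itself.
func sheetAsText(f *excelize.File, sheet string) (string, error) {
	rows, err := f.GetRows(sheet)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	for _, row := range rows {
		b.WriteString(strings.Join(row, "\t"))
		b.WriteString("\n")
	}
	return b.String(), nil
}

func main() {
	f, err := excelize.OpenFile("Book1.xlsx")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	text, err := sheetAsText(f, "Sheet1")
	if err != nil {
		log.Fatal(err)
	}

	// Only the extracted text goes to the model, not the workbook itself.
	prompt := "Here is the sheet data (tab-separated):\n" + text +
		"\nQuestion: what is the total of column B?"
	fmt.Println(prompt)
}
```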
