Redesigned autofilter popup

I’m happy to announce that I’ve managed to squeeze this new feature in just in time for the 3.5 code freeze.

What’s new?

As I’ve mentioned briefly on G+, I’ve been working on brushing up the age-old autofilter popup window over the past few weeks. I have no idea how old the old one is, but it’s been there for as long as I can remember. In case anyone needs a reminder as to what the old one looks like, here it is.

It’s functional, yet very basic. While it has served us for many years since the last century, it was also clear that the world has since moved on, and people have started craving modern looks and eye candy even in office productivity applications. Clearly, it was time for a change.

In contrast to the old, here is how the new one looks:

I don’t know about you, but I really like the new one better. :-)

Motivation

Aside from updating the aged look of the old popup, I was also motivated to introduce the new popup for its ability to allow selecting multiple values from the list.

As you can see in the screenshot, the old one allowed only one value to be selected for each given column, which was not only very limiting but also caused interoperability issues with Excel documents, especially those created in Excel 2007 and newer. In fact, adding this feature has been a long-term goal of mine ever since I began working on the OpenOffice.org code base professionally. Because of this background, I had a personal attachment to fulfilling that goal, and I’m really glad to have finally landed this feature 4 years and one name change (OOo to LibreOffice) later!

Laying the foundation

You may think that this new popup looks somewhat familiar. That’s because the same popup is also used as the pivot table (formerly data pilot) field member selection popup. I’ve touched on this previously on my blog, and you’ll probably notice the similarity when comparing the screenshot of the new popup with the screenshot of the pivot table popup included in that post.

Internally these two use the same code. In fact, when I developed that feature for the pivot table, I intentionally designed it to be re-usable, precisely so that I could use it for the autofilter popup at a later time.

So, the hard part of implementing the new popup had already been finished. All I had to do was to put the autofilter functionality into the popup and launch it instead of the ugly old one, which is precisely what I did. I also had to refactor the code that performs the filtering to allow multi-value matching, which, while invisible to users, was not a trivial task.
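
To give a rough idea of what multi-value matching involves, here is a minimal sketch in C++. The names are purely illustrative, not Calc’s actual filtering code:

// A minimal sketch of multi-value filter matching. Illustrative names
// only; the real filter entry structures in Calc look different.
#include <set>
#include <string>

struct FilterEntry
{
    // The set of values the user checked in the popup for one column.
    std::set<std::string> maSelectedValues;

    // A cell passes the filter if its value is among the selected ones,
    // instead of being compared against one single value as before.
    bool matches(const std::string& rCellValue) const
    {
        return maSelectedValues.count(rCellValue) > 0;
    }
};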

Going forward

The work is not totally done yet. As of this writing, the xlsx filter has not been fully adapted to take advantage of the new multi-selection capability, but that’s my next task, and I expect it to be done in time for 3.5.

Also, the menu still looks very basic, and contains only the same set of options that the old popup had. This was done deliberately so that we could ship it in time for 3.5, by avoiding the rather expensive process of re-designing the menu part of the popup. But I expect we’ll work on that re-design post-3.5, to make it even better and more usable. Note that the new popup is fully capable of doing submenus, which gives us all sorts of possibilities.

Anyhow, that’s all I have to say about this at the moment. I hope you guys will enjoy the new and shiny autofilter popup! :-)

Notes for testing

As with any new feature, this one needs lots of testing. I’ve written new unit tests to cover some parts of it, but unit tests can’t cover every corner case (especially those involving UI interactions), and manual testing from real users is always appreciated. Some of the affected areas I can think of are:

  • Built-in functions MATCH, LOOKUP, HLOOKUP and VLOOKUP that use the core filtering code which I’ve heavily refactored.
  • Import and export of the existing filtering rules, with ods, xls, and xlsx.
  • Filtering with pivot tables, which shares parts of the filtering code that has been refactored.
  • Standard and advanced filter dialogs.

So, watch out for the next daily build that includes this feature!

Import performance boost with form controls

This is another performance win.

I’ve just pushed changes to the master branch to improve the import performance of binary Excel documents containing tons of form controls. The test document I used had upwards of 500 form controls and, prior to the change, Calc would spend at least several minutes loading it. I don’t know the exact amount of time it took to open the document, because each time I tried to open it, I had to kill the app after I became too impatient to wait.

Long story short, the same document now opens in under 6 seconds on my machine.

The poor performance in this particular case stemmed from several different bottlenecks. They were:

  • inefficient algorithm in registering event listeners for VBA events,
  • inefficient algorithm in querying the code name from the parent application,
  • unnecessary VBA event registration for form controls, and
  • sending unnecessary notifications to property value change listeners during import for each and every property value insertion.

Registering event listeners for VBA events

When each control is inserted, we register several VBA events for it in order to handle events from the VBA code. For each event, we would register by passing the target and listener pair to the handler that handles event notification. As it turned out, however, each time that happens, the handler has to introspect the type of the target, because it is passed as UNO’s Any object. While each instance of that may take only a fraction of a second to complete, calling it literally millions of times adds up, not to mention that the target remains the same for the 12 or so listeners being registered for each control.

To solve this, I added a new method that registers multiple event listeners for an identical target in a single call, to avoid repeated and unnecessary introspection of the target type. This alone reduced the load time significantly (a 66% load-time reduction with my test document). However, this was still not enough for a larger number of controls since, as the number of controls grew, the load time would increase almost quadratically.
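
Here is a minimal sketch of the idea, with invented names standing in for the real UNO types; the point is that the costly type resolution happens once per control rather than once per listener:

#include <iostream>
#include <string>
#include <vector>

struct Target { std::string name; };

// Stand-in for UNO's Any: extracting the concrete target type out of it
// is the expensive introspection step.
struct Any { Target target; };

Target& resolveTarget(Any& rAny)
{
    // In the real code this is a costly type introspection.
    return rAny.target;
}

void attach(Target& rTarget, const std::string& rEventName)
{
    std::cout << "listening to " << rEventName << " on " << rTarget.name << "\n";
}

// New-style registration: one introspection for the whole batch of
// listeners, instead of one introspection per target/listener pair.
void addListeners(Any& rAny, const std::vector<std::string>& rEventNames)
{
    Target& rTarget = resolveTarget(rAny); // done once, not 12 times
    for (const std::string& rName : rEventNames)
        attach(rTarget, rName);
}

int main()
{
    Any aControl{{"CheckBox1"}};
    addListeners(aControl, {"Click", "Change", "GotFocus"});
    return 0;
}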

Querying the code name from the parent application

Another issue was the algorithm responsible for looking up the “code name” of the VBA module that a control belongs to. The code name is the name associated with each VBA module that Excel creates for each sheet. The name of the module does not necessarily equal the name of the sheet, and is unique to each sheet. The old algorithm would go through all existing form control instances in order to find a match, then backtrack to the sheet it is on in order to determine the correct code name. But because it had to iterate through all existing controls, as the number of controls grew, so did the time it took to find a match.

Since the code name is identical for all controls on a sheet, there was no reason to check every single control. So I added a new method to get the code name directly from the parent container of the controls. Since we create at most one container per sheet, this has made the code name lookup independent of the number of controls, giving quasi-constant-time lookup since the number of sheets doesn’t grow during the import.
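
Conceptually, the change looks like this (the types are illustrative, not the actual classes):

#include <string>
#include <vector>

struct Control {};

struct SheetControlContainer
{
    std::string maCodeName;          // VBA module name for this sheet
    std::vector<Control> maControls; // possibly hundreds of controls
};

// New lookup: ask the parent container directly. The cost is independent
// of the number of controls, and there is at most one container per sheet.
const std::string& getCodeName(const SheetControlContainer& rContainer)
{
    return rContainer.maCodeName;
}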

Unnecessary VBA event registration for form controls

There are two types of controls that Excel supports. One is the older form controls, which you can insert via the Forms toolbar, while the other is the newer OLE controls, which you can insert via the Control Toolbox toolbar. As luck would have it, Excel doesn’t support bindings to VBA for the form controls, so it was not necessary to register events for these guys when we import them (as Noel told me). Turning off event registration during form control import cut the load time down significantly. Many thanks to Noel for giving me a patch to turn this off for form controls.

Property value change listeners

Even after all these performance bottlenecks were squashed, the load time still didn’t feel as fast as it should be. So I did another round of profiling. It indicated that, every time we set a new property value on a control via XPropertySet, we would notify all property value change listeners to allow them to react to the change or veto it, and this happened unconditionally for every single property value insertion for every single control.

Since the likelihood of having to veto or change other property values based on a new property value insertion during file import is close to nil if not zero, I added a new API to temporarily turn off this notification. This cut the last few seconds off the overall load time, bringing it down to 6 seconds in total. The notification is turned back on after the loading is complete.
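
As a rough illustration of the concept (all names invented; the real change was a new UNO API), imagine a property container with a switch controlling its notification behavior:

#include <functional>
#include <map>
#include <string>
#include <vector>

class PropertyBag
{
public:
    typedef std::function<void(const std::string&, int)> ChangeListener;

    void addListener(ChangeListener aListener)
    {
        maListeners.push_back(std::move(aListener));
    }

    // Import code turns this off before bulk insertion, back on afterward.
    void enableChangeNotification(bool bEnable) { mbNotify = bEnable; }

    void setValue(const std::string& rName, int nValue)
    {
        maValues[rName] = nValue;
        if (!mbNotify)
            return; // skipped during import: no listener round-trips

        for (const ChangeListener& rListener : maListeners)
            rListener(rName, nValue);
    }

private:
    std::map<std::string, int> maValues;
    std::vector<ChangeListener> maListeners;
    bool mbNotify = true;
};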

Future consideration

There are several opportunities for future work. For one thing, the code name lookup also applies to the VBA event support in Writer. But because I wasn’t aware of how Writer organizes form controls, I didn’t touch its lookup algorithm. So, if the same inefficiency applies to Writer (which I’m not sure it does), then there may be a way to improve performance in that area.

Another area to consider is reducing the number of VBA events to register. As Noel told me, we currently register 12 or so events unconditionally for all controls imported from Excel documents. But technically we only have to register events that are actually needed. So, if we can find a way to determine what events we need to register by either parsing the VBA code or any other ways, we can reduce the number of VBA event registrations during the import.

This is all I can think of at the moment. Thank you ladies and gentlemen.

New option to specify initial number of sheets

This just landed on the master branch.

Starting with LibreOffice 3.5, you can now specify the initial number of sheets that new documents will have. Previously, this was hard-coded to 3 sheets no matter what. While this didn’t seem to bother a whole lot of people, judging by how few bug reports we’d received on it, it did bother some users enough that one of them decided to code up a patch to make it happen. Now, without further ado, let’s take a look at the new option page:

new-doc-options

where you can change the number of worksheets in new documents; the change becomes effective the next time you create a new document.

Last but not least, the name of the person who made this all happen is Albert Thuswaldner. Please give kudos to him for his excellent work. :-)

Named range as data source in DataPilot table

In my previous post, I hinted that you can now use a named range as the data source of a DataPilot table, but that you couldn’t yet create a new DataPilot table with a named range as the source.

Well, now you can.

I tried to come up with a clever way to add this functionality, but ended up with just another radio button in the existing source selection dialog (the dialog that pops up when you select Data – DataPilot – Start without an existing DataPilot table).

Here is a screenshot of the new dialog as evidence:
calc-dp-named-range-source

This functionality is currently available on the master branch of LibreOffice. For those of you who can build LibreOffice directly from the repository, go check it out!

For those of you who would rather wait for a released version, this will be available in 3.4, the next minor release. Refer to this page for a more detailed release plan for the upcoming versions of LibreOffice.

FOSDEM 2011 slide & latest updates

I’ve just uploaded the slides for my talk during FOSDEM 2011 here. It was very nice to be able to talk about our somewhat ambitious plan to bring LibreOffice Calc to the next level. Also, I regret that I haven’t been able to blog about what’s been going on lately; lots of time has been spent writing and reviewing code, fixing bugs and integrating patches, and sadly little time is left for writing blog posts.

Having said all that, let me talk about a few things that are new on the master branch (since I’m already in the writing mode).

The first one is the new move/copy sheet dialog

new-copy-move-sheet-dialog

which is based on a design suggestion from Christoph Noak and was coded by Joost Eekhoorn. The idea is to provide a quick way to rename a copied sheet, and also to make the layout more ergonomic and more appropriate to a modern HIG. There are still some minor issues that we have yet to work out, but this is a step in the right direction.

The second one is related to DataPilot. In fact there are two new enhancements landed on master with regard to DataPilot.

The first enhancement is support for an unlimited number of fields. Previously, DataPilot could only support up to 8 fields in each dimension (page, column, row and data). Now you can define as many fields in each dimension as you desire, provided that you have enough memory and CPU cycles to handle the extra load.

calc-dp-unlimited-fields

The second DataPilot enhancement is support for a named range as the data source. Now you can use a named range as the data source of a DataPilot table, instead of a raw range reference. This has the advantage that, when your source range grows, you can simply update the named range and refresh the DataPilot table.

calc-dp-named-range-source

However, I have not yet added a way to create a new DataPilot table with a named range as data source. I will work on that sometime soon, hopefully in time for our 3.4 release.

Other than that, I’ve fixed quite a number of bugs and added performance enhancements particularly with regard to external reference handling. Still, there are lots of other tasks I need to do on master before we hit the 3.4 release. Stay tuned for more updates.

Key binding compatibility options (take 2)

This post is a sequel to this previous post, so refer to that post for the details of what I’ve been working on.

Anyway, I have settled with the following Compatibility option page:

calc-compat-option

which should be just adequate for what it needs to do without being too annoying.

Also, just for the matter of documenting its behavior, the following chart shows what actions are associated with what key bindings for the two key binding types (Default and OpenOffice.org legacy):

Key Binding     Default           OpenOffice.org legacy
Backspace       delete contents   delete
Delete          delete            delete contents
Ctrl-D          fill down         data select
Shift-Ctrl-D    data select

where the actions are

  • delete contents – launch the delete contents dialog.
  • delete – immediately delete the cell content, without the dialog.
  • fill down – fill cell content downward within selection.
  • data select – launch the selection list dialog.

Note that all the other key bindings are left untouched. Also, the list of key bindings that can get reset by this functionality may grow in future releases.

Key binding compatibility options

As I posted on the libreoffice development list, I’m currently working on adding a new option page in the Options dialog, to provide a quick way to switch key bindings between LibreOffice’s default and OpenOffice.org’s for Calc. For the most part, the default key bindings are identical between LibreOffice and OpenOffice.org as far as Calc’s concerned, but there are some differences, which are enough to annoy those users who are accustomed to the old key bindings from OOo Calc.

The option page will look something like this:
calc-key-binding-compat

but it will not look exactly like this since this is just my first viewable version with no design effort put into it at all. So, hopefully we can morph this into something a little more usable and a little more bearable.

Right now, we only deal with key binding compatibility; however, we may expand this page to incorporate other compatibility options in future versions of LibreOffice.

Stay tuned!

mso-dumper now packaged in OBS

I’m happy to announce that the mso-dumper tool is now packaged in the openSUSE Build Service under my home repository. This tool is written in Python, and allows you to dump the contents of MS Office documents stored in the BIFF-structured binary file format in a more human-readable fashion. It is an indispensable tool when dealing with importing from and/or exporting to these Office documents. Right now, only the Excel and PowerPoint formats are supported.

This package provides two new commands, xls-dump and ppt-dump. If you wish to dump the content of an Excel document, all you have to do is run

xls-dump ./path/to/mydoc.xls

and it dumps its content to standard output. What the output looks like depends on what’s stored with the document, but it will look something like this:

...
0085h: =============================================================
0085h: BOUNDSHEET - Sheet Information (0085h)
0085h:   size = 14
0085h: -------------------------------------------------------------
0085h: B4 09 00 00 00 00 06 00 53 68 65 65 74 31 
0085h: -------------------------------------------------------------
0085h: BOF position in this stream: 2484
0085h: sheet name: Sheet1
0085h: hidden state: visible
0085h: sheet type: worksheet or dialog sheet

008Ch: =============================================================
008Ch: COUNTRY - Default Country and WIN.INI Country (008Ch)
008Ch:   size = 4
008Ch: -------------------------------------------------------------
008Ch: 01 00 01 00 

00EBh: =============================================================
00EBh: MSODRAWINGGROUP - Microsoft Office Drawing Group (00EBh)
00EBh:   size = 90
00EBh: -------------------------------------------------------------
00EBh: 0F 00 00 F0 52 00 00 00 00 00 06 F0 18 00 00 00 
00EBh: 02 04 00 00 02 00 00 00 02 00 00 00 01 00 00 00 
00EBh: 01 00 00 00 02 00 00 00 33 00 0B F0 12 00 00 00 
00EBh: BF 00 08 00 08 00 81 01 09 00 00 08 C0 01 40 00 
00EBh: 00 08 40 00 1E F1 10 00 00 00 0D 00 00 08 0C 00 
00EBh: 00 08 17 00 00 08 F7 00 00 10 

00FCh: =============================================================
00FCh: SST - Shared String Table (00FCh)
00FCh:   size = 8
00FCh: -------------------------------------------------------------
00FCh: 00 00 00 00 00 00 00 00 
00FCh: -------------------------------------------------------------
00FCh: total number of references: 0
00FCh: total number of unique strings: 0
...

I originally wrote this tool to deal with the Excel import and export part of Calc’s development, and I continue to develop it further. Thorsten Behrens later joined forces and added support for the PowerPoint format. Right now, I’m working on adding an XML output format option to make it easier to compare outputs, which is important for regression testing.

Increasing Calc’s row limit to 1 million

Introduction

With the child work space (CWS) koheirowlimitperf being marked ready for QA, I believe this is a good time to talk about the change that the CWS will bring once it gets integrated.

The role of this CWS is to upstream various pieces of performance optimization from Go-OO that arose from the increase of the row limit from 65536 (64 thousand) rows to 1048576 (1 million) rows. However, the upstream build will not see the increase of the row limit itself yet, as the upstream developers still consider that move premature due to two outstanding issues that are show stoppers for them. I’ll talk more about those issues later.

What this CWS does change is the storage of row attributes, in order to 1) improve the performance of querying the attributes, and 2) make extra information available that can be used to make various operations more efficient. The CWS also makes several other changes to improve performance in general, though these are not related to the change in the row attribute storage.

Limitation of the old attribute storage

Before I talk about how the row attributes are stored in the new storage, I’d like to talk about the limitation of the old attribute storage, and why it was no longer adequate once the row limit was raised to 1 million rows. In this article I’ll only talk about row attribute storage, but the same argument applies to the column attribute storage as well.

The old attribute container was designed to store several different attributes together, namely:

  • hidden state,
  • filtered state,
  • automatic page break position,
  • manual page break position, and
  • whether or not a row has a manual height.

They were stored per range, not per individual row or column, so that if a range of rows had an identical set of attribute values over the entire range, that attribute set would be stored as a single record. Searching for the value of an attribute for an arbitrary row was performed linearly from the first record, since the core of the container was simply an array.

There were primarily two problems with this storage scheme that made the container non-scalable. First, the attributes stored together in this container had different distribution patterns, which caused over-partitioning of the container and unnecessarily slowed down the queries of all stored attributes.

For instance, the hidden and filtered attributes are distributed in a very similar manner, but the manual height attribute is not necessarily distributed in a manner similar to these attributes. Because of this, storing that together with the hidden and filtered attributes unnecessarily increased the partition count, which in turn would slow down the query speed of all three attributes.

Even more problematic were the automatic page break attributes; because the automatic page breaks always need to be set for the entire sheet, increasing the row limit significantly raised the partition count. On top of that, the page breaks themselves are actually single-row attributes; it made little sense to store them in a range-based container.

This over-partitioning problem led to the second problem: when the container was over-partitioned, querying an attribute value became very slow due to the linear search algorithm used in the query, whose cost scales with the number of partitions. Because row attributes are used extensively in many areas of Calc’s operations, and oftentimes in loops, the degradation of their lookup performance caused all sorts of interesting performance problems when the row limit was raised to 1 million.

New way of storing row attributes

Separation of row attributes

The first step in speeding up the storage and lookup of row attributes was to separate them into their own containers, to avoid the storage of one attribute affecting the storage of another. It was natural to use a range-based container to store the hidden, filtered and manual height attributes, since these attributes typically span many consecutive rows. The page break positions, on the other hand, should be stored as point values rather than range values, since they rarely occur in ranges and are always set on individual rows.

I picked STL’s std::set container to store the automatic and manual page break positions (they are stored separately in two set containers). That alone significantly sped up sheet pagination, whose performance had previously suffered due to the poor speed of the old container. Later on I improved the pagination performance even further by modifying the pagination algorithm itself, but more on that later.
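
As an illustration, storing the break positions as point values is as simple as this (SCROW stands in for Calc’s row index type; the function names are mine):

#include <set>

typedef long SCROW; // stand-in for Calc's row index type

std::set<SCROW> aManualBreaks; // rows with manual page breaks
std::set<SCROW> aAutoBreaks;   // rows with automatic page breaks

void setManualBreak(SCROW nRow)    { aManualBreaks.insert(nRow); }
void removeManualBreak(SCROW nRow) { aManualBreaks.erase(nRow); }

bool hasManualBreak(SCROW nRow)
{
    // Sorted, deduplicated storage with O(log n) lookup for free.
    return aManualBreaks.count(nRow) > 0;
}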

For the hidden and filtered attributes, I picked a data structure that I call the flat segment tree, which I designed and implemented specifically for this purpose. Row heights are also stored in the flat segment tree.

Flat segment tree

I named this data structure “flat segment tree” because it is a modified version of a data structure known as the segment tree; unlike the original segment tree, which supports storage of overlapping ranges, my version only stores non-overlapping ranges, hence the name “flat”. The structure of the flat segment tree largely resembles that of the original segment tree; it consists of a balanced binary tree whose leaf nodes store the values, while its non-leaf nodes store auxiliary data used only for querying purposes. The leaf nodes are doubly-linked, allowing quick access to their neighboring nodes. Since ranges never overlap in this data structure, one leaf node represents both the end of a preceding range and the start of the range that follows. Last but not least, this data structure is a template, and allows you to specify the types of both the key and the value.

flat-segment-tree-lookup

There are three advantages to using this data structure: 1) compactness of storage, since only the range boundaries are stored as nodes; 2) reasonably fast lookup, thanks to its tree structure; and 3) a single query of a stored value also returns the lower and upper boundary positions of that range with no additional overhead. The last point is very important, and I will explain why in the next section.

As an additional rule, the flat segment tree guarantees that the values of adjacent ranges are always different. There is no exception to this rule, so you can take advantage of this when you use this structure in your code.

Also, please do keep in mind that, while the lookup of a value is reasonably fast, it is not without an overhead. So you are discouraged from performing, say, a lookup for every single row when iterating through a series of rows in a loop. Instead, make judicious use of the range boundary info to skip ahead in such situations.
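
To make the interface more concrete, here is a small usage sketch storing row hidden flags. Since the API is not frozen (see below), treat the exact calls as an approximation rather than a stable contract:

#include <mdds/flat_segment_tree.hpp>

typedef mdds::flat_segment_tree<long, bool> RowHiddenType;

int main()
{
    // Rows 0 through 1048575, all visible (false) by default.
    RowHiddenType aHidden(0, 1048576, false);

    aHidden.insert_front(10, 100, true); // hide rows in [10, 100)
    aHidden.build_tree();                // populate the non-leaf nodes

    bool bHidden = false;
    long nStart = -1, nEnd = -1;

    // A single query returns the value and the boundary positions of
    // the range that the value applies to, with no extra overhead.
    aHidden.search_tree(50, bHidden, &nStart, &nEnd);
    // Now bHidden == true, nStart == 10, nEnd == 100.
    return 0;
}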

This data structure is distributed independently of the OOo code base, licensed under MIT/X11. You can find the source code at http://code.google.com/p/multidimalgorithm/. That project includes other data structures besides the flat segment tree; however, only the flat segment tree is currently usable by 3rd party programs. The implementations of the other structures are still in an experimental stage, and need to be properly templatized before becoming usable in general. Even the flat segment tree is largely undocumented; this is intentional, since the API is still not entirely frozen and is subject to change in future versions. You have been warned.

Loop count reduction

Aside from the aforementioned improvement associated with the row attribute storage, I also worked on improving various algorithms used throughout Calc’s core, by taking advantage of one feature of the new data structure.
loop-for-each-row

As I mentioned earlier, the flat segment tree returns the lower and upper boundary positions of the range as part of a normal value query. You can make use of this extra piece of information to significantly reduce the number of iterations in an algorithm that loops through a wide row range. Put another way, since you already know the attribute value associated with that range, and you also know the start and end positions of that range, you don’t need to query the value for every single row position within that range. And the reduction of the loop count means a reduction of the time required to complete that operation, resulting in better performance.
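
Continuing the sketch from the previous section, the pattern looks like this (processVisibleRows() is a hypothetical callback):

void processVisibleRows(long nStartRow, long nEndRow); // hypothetical

void forEachVisibleRange(const RowHiddenType& rHidden, long nMaxRow)
{
    long nRow = 0;
    while (nRow < nMaxRow)
    {
        bool bHidden = false;
        long nStart = -1, nEnd = -1;
        rHidden.search_tree(nRow, bHidden, &nStart, &nEnd);

        if (!bHidden)
            processVisibleRows(nStart, nEnd - 1); // whole block at once

        nRow = nEnd; // skip to the start of the next range: one
                     // iteration per range, not one per row
    }
}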

That’s the gist of the performance improvement work I did in various parts of Calc, though there were slight variations depending on which part of Calc’s code I worked on. In summary, the following areas received significant performance improvement:

  • Sheet pagination, which consisted of loops in which numerous calls were made to query rows’ hidden states and height values.
  • Print preview, mostly due to the improvement of the pagination performance.
  • Calculation of drawing objects’ vertical positions.

I’m sure there are other areas where the performance still needs improvement. As this is an on-going effort, we will work on resolving any other outstanding issues as we discover them.

Other related work

In addition to the re-work of the row attribute storage and the performance improvement involving the row attribute queries, I’ve also made other changes to improve performance and ensure that Calc’s basic usability is not sacrificed.

Removal of redundant pagination

Prior to my row limit increase work, Calc would re-calculate page break positions again and again, even when no changes that would alter page break positions (such as changing row heights, filtering rows, or inserting manual page breaks) had been made to the document. Because the pagination operation became much more expensive after the row limit increase, I decided to remove this redundancy so that re-pagination is done only when necessary. This change made an especially huge impact on print preview performance, since (for whatever reason) Calc was performing a full pagination every time you moved the mouse cursor within the preview pane, even when the movement was only by one pixel! Removing such redundant re-pagination has brought sanity back to the print preview experience.

Efficient zoom level calculation

The row limit increase also degraded the performance of calculating the correct zoom level to fit the document within a specified page size. Calc does this when you tell your document to “fit within n pages wide and y pages tall” or “fit to n pages in total”. The root cause was again the degraded pagination performance. This time, however, I could not use the trick of “performing pagination only once”, because we do need to perform full pagination repeatedly at different zoom levels in order to find the correct one.

The solution I employed was to reduce the number of re-paginations by using the bisection method to arrive at the correct zoom level. The old code worked like this:

  1. Initialize the zoom level to 100%, and perform full pagination.
  2. If that doesn’t fit the required page size, decrement the zoom level by 1%, and perform full pagination once again.
  3. If that doesn’t fit, decrement the zoom level by 1%, and try again.
  4. Continue this until the correct zoom level is reached.

Of course, if the correct zoom level is far below the initial value of 100%, this algorithm is not very efficient. If the desired zoom level is 35%, for example, Calc would need to perform full pagination 66 times. Switching to the bisection method here reduced the full pagination count roughly down to the neighborhood of 5 or 6. At the time I worked on this, each full pagination took about 1 second, so the reduction of the pagination count from 66 to 5 roughly translated to a reduction of the zoom level calculation from 1 minute to 5 seconds. Suffice it to say that this made a big difference.
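
Here is a minimal sketch of the bisection search; fitsPageSize() is a hypothetical helper that performs one full pagination at the given zoom level:

bool fitsPageSize(int nZoom); // true if the sheet fits the requested pages

int findZoomLevel(int nMin = 10, int nMax = 100)
{
    // Binary search for the largest zoom level that still fits,
    // assuming nMin always fits (it is the floor of the search range).
    while (nMin < nMax)
    {
        int nMid = (nMin + nMax + 1) / 2;
        if (fitsPageSize(nMid))
            nMin = nMid;     // nMid fits; try a larger zoom
        else
            nMax = nMid - 1; // too big; shrink the search window
    }
    return nMin; // each probe costs one full pagination: ~7 for 10..100
}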

Even better news is that the performance of this operation is much faster today, thanks to the improvement I made in the pagination performance in general.

Calculation of autofill marker position

autofill

When making a selection, Calc puts a little square at the lower-right corner of the selection. That’s called the autofill marker, and it’s there to let you drag the selection to fill values.

Interestingly, calculating its position (especially its vertical position) turned out to be a very slow operation when the marker was positioned close to the bottom of the sheet. Worse, Calc calculated its position even when it was outside the visible area. The slowdown was especially apparent when making a column selection: because selecting columns always places the autofill marker at the last row of the sheet, increasing the row limit made that process sluggish. The solution was to simply detect whether the autofill marker is outside the visible area, and if it is, skip calculating its position (there is no point calculating a position we don’t need to display). That brought the process of column selection back to normal.
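
The early-out is conceptually this simple (names invented for illustration):

void computeAndDrawMarker(long nMarkerRow); // the costly part

void updateAutoFillMarker(long nMarkerRow,
                          long nFirstVisibleRow, long nLastVisibleRow)
{
    if (nMarkerRow < nFirstVisibleRow || nMarkerRow > nLastVisibleRow)
        return; // marker is off-screen: nothing to compute or draw

    computeAndDrawMarker(nMarkerRow);
}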

However, the sluggishness of making a selection can still manifest itself under the right (wrong?) conditions even with this change. We still need to speed up the calculation of the marker’s vertical position, by improving the calculation algorithm itself.

Show stoppers for the upstream build

When we met during last year’s OOoCon in Orvieto, I sat down with Niklas Nebel and Eike Rathke, Sun’s Calc co-leads, and briefly discussed the possibility of increasing the row limit in the upstream version of Calc. During our discussion, I was told that, in addition to the general performance issues, most of which I’ve already resolved, we will need to resolve at least two more outstanding issues before they can set the row limit to 1 million in the upstream build.

First, we need to improve the performance of formula calculation and the value change propagation mechanism (which we call “broadcasting”). The existing implementation is still tuned for a grid size of 65536 rows; we need to re-tune it for 1 million rows and ensure that performance will not suffer after the row limit increase.

cell-note-misplaced

Second, we need to resolve the incorrect positioning of drawing objects at higher row positions. This one is somewhat tricky, since the drawing objects are drawn entirely independently of the sheet grid, and the coarse resolution of the drawing layer causes the vertical position of a drawing object to deviate from its intended position. Generally speaking, the higher the row position, the more deviation results. With a maximum of 65536 rows, this was not such a big issue, since the amount of deviation was barely noticeable even at the highest row position. But because the problem becomes much more noticeable with 1 million rows, it needs to be addressed somehow.

Going forward…

Going forward, I will continue to hunt for the remaining performance issues, and squash them one by one. The major ones should all be resolved by now, so what remains should be corner case issues, performance-wise. As for the two outstanding issues I mentioned in the previous section, we will have to take a good look at them at some point. Whether or not they are really show stoppers is somewhat subject to personal viewpoint, but they are real issues needing resolution, no matter what their perceived severity is.

Also, as of this writing, the manual row size attribute is still stored in the old, array-based container. It will probably make sense to migrate that to the flat segment tree, so that we can eliminate the old container once and for all, and have a fresh start with the new one. Having said that, doing so would require another round of refactoring of non-trivial scale, so it should be conducted with care and proper testing.

The ODS export filter still needs re-work. Currently, all row attributes, which are now stored separately, are temporarily merged back into the old array-based container before the document is exported to ODS. The reason is that the ODS export filter code still expects the partitioning behavior of the old container during the export of row styles. In order to fully embrace the new storage of row attributes, that code needs to be adjusted to work with the new storage scheme. Again, this will require a non-trivial amount of code change, and thus should be conducted with care.

Calculation of the vertical positions of various objects, such as the autofill marker, can still use some algorithmic improvement. We can make it more efficient by taking advantage of the flat segment tree, in a way similar to how the pagination algorithm was made more efficient.

Conclusion

This concludes my write-up on the current status of Calc’s row limit increase work. I hope I’ve made it clear that work is underway toward making that happen without degrading Calc’s basic usability. As a matter of fact, the row limit has already been increased to 1 million in some variants of OOo, such as Go-OO. I believe we’ll be able to increase the row limit in the upstream version in the not-so-distant future as long as we keep working at the remaining issues.

That’s all I have to say for now. Thank you very much, ladies and gentlemen.

Inserting current date and time in one step

current-date-time

Here is another simple feature that may come in handy.

With the change I just checked into ooo-build master, you can now insert the current date and time with just one keystroke. By default, Ctrl+; (semicolon) is bound to current date, while Ctrl+Shift+; is bound to current time. These key bindings are configurable in case you don’t like the defaults.