# Infinity Scroll
The Infinity Scroll feature enables dynamic data loading as users scroll through the grid, efficiently managing large datasets by loading data in chunks and cleaning up unused data to optimize memory usage.
## Key Features
- Dynamic data loading based on scroll position
- Pre-loads data ahead of scroll direction
- Efficient memory management by cleaning up off-screen data
- Configurable chunk size and buffer sizes
- Support for both known and unknown total data sizes
## Basic Setup
To enable infinity scroll in your grid:
```js
import { InfinityScrollPlugin } from '@revolist/revogrid-plugin-infinity-scroll';

const grid = document.createElement('revo-grid');
grid.plugins = [InfinityScrollPlugin];

grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,          // Number of rows per chunk
    bufferSize: 100,        // How many rows to keep in the buffer
    preloadThreshold: 0.75, // When to start loading more data (0-1)
    total: 1000,            // Optional: total number of rows
    loadData: async (skip, limit, order, filter) => {
      // Fetch data from your API
      const response = await fetch(
        `/api/data?skip=${skip}&limit=${limit}&order=${JSON.stringify(order)}&filter=${JSON.stringify(filter)}`
      );
      return response.json();
    },
  },
};
```
## Configuration Options
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `chunkSize` | `number` | `undefined` | Number of rows to load in each chunk. Can be set dynamically by the grid. |
| `bufferSize` | `number` | `undefined` | Number of rows to keep in the memory buffer. Can be set dynamically by the grid. |
| `preloadThreshold` | `number` | `0.75` | Scroll position (0-1) at which loading of the next chunk is triggered. |
| `total` | `number` | `undefined` | Total number of rows. If not provided, the plugin grows the data source on the fly, so the scrollable area expands as you scroll. |
| `loadData` | `function` | required | Function that loads data chunks. |
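If the total row count is unknown up front, omit `total` and let the grid grow as chunks arrive. A minimal sketch, assuming the endpoint returns a plain JSON array of row objects and signals the end of data by returning fewer rows than requested:

```js
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,
    bufferSize: 100,
    // No `total`: the scrollable area grows as new chunks arrive
    loadData: async (skip, limit) => {
      const response = await fetch(`/api/data?skip=${skip}&limit=${limit}`);
      const rows = await response.json();
      // When the API returns fewer rows than requested,
      // there is no more data to load
      return rows;
    },
  },
};
```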
## Memory Management
The plugin automatically manages memory by:
- Loading new chunks of data as the user scrolls
- Maintaining a buffer of rows before and after the visible area
- Cleaning up data that’s far from the current viewport
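Conceptually, the retained window is the visible range extended by `bufferSize` rows on each side. The following is an illustrative sketch of that bookkeeping only, not the plugin's actual internals; `firstVisible` and `lastVisible` are hypothetical viewport indices:

```js
// Illustrative only: compute which rows to keep around the viewport.
function retainedRange(firstVisible, lastVisible, bufferSize, total) {
  const start = Math.max(0, firstVisible - bufferSize);
  const end = total !== undefined
    ? Math.min(total - 1, lastVisible + bufferSize)
    : lastVisible + bufferSize;
  // Rows outside [start, end] are eligible for cleanup
  return { start, end };
}
```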
## Best Practices
- **Choose Appropriate Chunk Size**
  - Smaller chunks mean more frequent loading but lower memory usage
  - Larger chunks mean fewer API calls but higher memory usage
- **Configure Buffer Size**
  - Larger buffers provide smoother scrolling but use more memory
  - Smaller buffers save memory but may trigger more loading events
- **Optimize Data Loading**
  - Implement server-side pagination in your API (see the sketch after this list)
  - Return only the data fields you need
  - Consider data compression for large datasets
- **Handle Loading States**
  - Show loading indicators during data fetching
  - Handle errors gracefully
  - Consider implementing retry logic for failed requests (see the retry sketch below)
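As a companion to the data-loading tips above, here is a minimal sketch of a server-side pagination endpoint using Express. The route path, query parameters, and in-memory data source are illustrative assumptions, not part of the plugin:

```js
import express from 'express';

const app = express();

// Placeholder data source; replace with your database access layer.
const allRows = Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `Row ${i}` }));

// Hypothetical endpoint shape matching the loadData calls above.
app.get('/api/data', (req, res) => {
  const skip = Number(req.query.skip) || 0;
  // Cap the page size so a client cannot request the whole dataset at once
  const limit = Math.min(Number(req.query.limit) || 50, 500);
  res.json(allRows.slice(skip, skip + limit));
});

app.listen(3000);
```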
## Example with Loading State
```js
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,
    bufferSize: 100,
    loadData: async (skip, limit, order, filter) => {
      try {
        // Show loading state
        const response = await fetch(
          `/api/data?skip=${skip}&limit=${limit}&order=${JSON.stringify(order)}&filter=${JSON.stringify(filter)}`
        );
        return await response.json();
      } catch (error) {
        console.error('Failed to load data:', error);
        return []; // Return empty array on error
      } finally {
        // Hide loading state
      }
    },
  },
};
```
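For flaky networks, the `loadData` callback can wrap its fetch in simple retry logic. A minimal sketch with exponential backoff follows; the retry count and delays are arbitrary choices for illustration, not plugin settings:

```js
// Retry an async function a few times with exponential backoff.
async function withRetry(fn, retries = 3, baseDelayMs = 300) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === retries - 1) throw error; // give up after the last attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage inside the infinity scroll config:
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,
    loadData: (skip, limit) =>
      withRetry(async () => {
        const response = await fetch(`/api/data?skip=${skip}&limit=${limit}`);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        return response.json();
      }),
  },
};
```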
This setup handles large datasets efficiently while keeping memory usage bounded and scrolling responsive.