Package processing of mass data with COMMIT WORK and SELECT statement

Here is a solution for processing a big table when you cannot use an index (for example, you are looking for particular values and have to scan the whole table). The runtime will of course be long, and normally your program would be aborted because of the parameter rdisp/max_wprun_time.
With this template you can avoid that problem. The approach is to collect the table keys, read the data in packets using these keys and do a COMMIT WORK between packets.
TYPES:
* keys of table TBTCO (Job Status Overview Table), which is probably big in your system
* it is not necessary to specify all primary key fields,
* just specify as many leading key fields (without gaps) as needed
* to read the data in meaningful packets; for table BKPF, for example,
* the fields MANDT and BUKRS alone might not be enough
  BEGIN OF ts_keys,
    jobname  TYPE tbtco-jobname,
    jobcount TYPE tbtco-jobcount,
  END OF ts_keys,
* this table does not have to be a hashed table, but the hashed key adds no real overhead and can be useful
  tt_keys TYPE HASHED TABLE OF ts_keys WITH UNIQUE KEY jobname jobcount,

  ts_data TYPE tbtco,
* this table does not have to be a hashed table, but the hashed key adds no real overhead and can be useful
  tt_data TYPE HASHED TABLE OF tbtco WITH UNIQUE KEY jobname jobcount.

DATA:
  lt_keys           TYPE tt_keys,
  ls_keys           TYPE ts_keys,
  lv_keys_count     TYPE i,
  lt_entries        TYPE tt_keys,
  lt_data           TYPE tt_data,
  ls_data           TYPE ts_data,
  lt_all_data       TYPE tt_data,
  lv_all_data_count TYPE i,
  lv_count          TYPE i,
  lv_packet_size    TYPE i.

* create internal table with keys
SELECT DISTINCT jobname jobcount
  INTO TABLE lt_keys
  FROM tbtco.
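* note: this key selection still touches all rows once, but only the key
* fields are transferred from the database, so the result set stays small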

lv_keys_count = lines( lt_keys ).
lv_packet_size = 1000.
lv_count = 0.

LOOP AT lt_keys INTO ls_keys.

* create internal table for SELECT ... FOR ALL ENTRIES IN
  INSERT ls_keys INTO TABLE lt_entries.
  ADD 1 TO lv_count.

  IF   lv_count MOD lv_packet_size = 0    " packet is full
    OR lv_count = lv_keys_count.          " or last row (process last packet, maybe not full)

    SELECT *
      INTO TABLE lt_data
      FROM tbtco
      FOR ALL ENTRIES IN lt_entries WHERE jobname  = lt_entries-jobname
                                      AND jobcount = lt_entries-jobcount.
*                                     AND ... " add conditions on non-index fields here
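*     note: if the FOR ALL ENTRIES table were empty, the WHERE condition
*     would be ignored and all rows would be selected; here lt_entries
*     always contains at least one row when this SELECT is reached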

    COMMIT WORK. " the database commit resets the runtime counter checked against rdisp/max_wprun_time

    INSERT LINES OF lt_data INTO TABLE lt_all_data.
    CLEAR lt_entries.
  ENDIF.
ENDLOOP.

*LOOP AT lt_all_data INTO ls_data.
*  WRITE: / ls_data-jobname, ls_data-jobcount.
*ENDLOOP.

lv_all_data_count = lines( lt_all_data ).
WRITE: / lv_keys_count.
WRITE: / lv_all_data_count.
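
As an illustration of the placeholder inside the SELECT above, the packet read could be restricted by non-index fields, e.g. to aborted jobs scheduled by a particular user. This is only a sketch: the TBTCO fields STATUS and SDLUNAME and the literal values are example assumptions, replace them with your actual selection criteria.

    SELECT *
      INTO TABLE lt_data
      FROM tbtco
      FOR ALL ENTRIES IN lt_entries WHERE jobname  = lt_entries-jobname
                                      AND jobcount = lt_entries-jobcount
                                      AND status   = 'A'         " aborted jobs (example value)
                                      AND sdluname = 'SOMEUSER'. " scheduling user (example value)
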
See other related notes in my infodepot:
- Commit Work in SELECT statement
- Information about amount of read records returned by SELECT, SELECT SINGLE, SELECT COUNT, etc.
- Package processing of mass data with database commit and SELECT statement
- SQL inner join vs. join of internal tables
- Select-Options in dynamic WHERE condition called per RFC
Full list of examples in my infodepot

