- +a size
- 
        Suggested stack size, in kilowords, for threads in the
          async thread pool. Valid range is 16-8192 kilowords. The
          default suggested stack size is 16 kilowords, that is, 64
          kilobytes on 32-bit architectures. This small default size
          has been chosen because the number of async threads can
          be large. The default size is enough for drivers
          delivered with Erlang/OTP, but might not be large
          enough for other dynamically linked-in drivers that use the
          driver_async() functionality.
          Notice that the value passed is only a suggestion,
          and it can even be ignored on some platforms. 
- +A size
- 
        Sets the number of threads in the async thread pool. Valid range
          is 1-1024. The async thread pool is used by linked-in drivers to
          handle work that may take a very long time. Since OTP 21 there are
          very few linked-in drivers in the default Erlang/OTP distribution
          that use the async thread pool. Most of them have been migrated to
          dirty IO schedulers. Defaults to 1. 
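          For example, the configured pool size can be verified at
          runtime (the +A value below is an arbitrary illustration): 
% erl +A 64
1> erlang:system_info(thread_pool_size).
64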
- +B [c | d | i]
- 
        Option c makes Ctrl-C
          interrupt the current shell instead of invoking the emulator break
          handler. Option d (same as specifying
          +B without an extra option) disables the break
          handler. Option i makes the emulator ignore any
          break signal. If option c is used with
          oldshell on Unix, Ctrl-C will
          restart the shell process rather than interrupt it. Notice that on Windows, this flag is only applicable for
          werl, not erl
          (oldshell). Notice also that
          Ctrl-Break is used instead of
          Ctrl-C on Windows. 
- +c true | false
- 
        Enables or disables
          time
          correction: 
          - true
- Enables time correction. This is the default if
            time correction is supported on the specific platform.
- false
- Disables time correction.
 For backward compatibility, the boolean value can be omitted.
          This is interpreted as +c false. 
- +C no_time_warp | single_time_warp |
        multi_time_warp
- 
        Sets time warp
          mode: 
        - no_time_warp
- 
            No time warp mode (the default)
- single_time_warp
- 
            Single time warp mode
- multi_time_warp
- 
            Multi-time warp mode
 
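        Both the time correction and the time warp settings can be
          inspected at runtime. An illustrative invocation: 
% erl +C multi_time_warp +c true
1> erlang:system_info(time_warp_mode).
multi_time_warp
2> erlang:system_info(time_correction).
true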
- +d
- 
        If the emulator detects an internal error (or runs out of memory),
          it, by default, generates both a crash dump and a core dump.
          The core dump is, however, not very useful as the content
          of process heaps is destroyed by the crash dump generation. Option +d instructs the emulator to produce only a
          core dump and no crash dump if an internal error is detected. Calling 
          erlang:halt/1 with a string argument still
          produces a crash dump. On Unix systems, sending an emulator process
          a SIGUSR1 signal also forces a crash dump. 
- +dcg DecentralizedCounterGroupsLimit
- 
        Limits the number of decentralized counter groups used by
           decentralized counters optimized for update operations in the
           Erlang runtime system. By default, the limit is 256. When the number of schedulers is less than or equal to the
           limit, each scheduler has its own group. When the
           number of schedulers is larger than the groups limit,
           schedulers share groups. Shared groups degrade
           the performance for updating counters while many reader groups
           degrade the performance for reading counters. So, the limit is a tradeoff
           between performance for update operations and performance for
           read operations. Each group consumes 64 bytes in each
           counter. Notice that a runtime system using decentralized
           counter groups benefits from binding
           schedulers to logical processors, as the groups are
           distributed better between schedulers with this option. This option only affects the decentralized counters
           that keep track of the memory consumption
           and the number of terms in ETS tables of type ordered_set with
           the write_concurrency option activated. 
- +e Number
- 
        Sets the maximum number of ETS tables. This limit is
	partially obsolete.
	 
- +ec
- 
        Forces option compressed on all ETS tables.
          Only intended for test and evaluation. 
- +fnl
- 
        The virtual machine works with filenames as if they are encoded
          using the ISO Latin-1 encoding, disallowing Unicode characters with
          code points > 255. For more information about Unicode filenames, see section
          Unicode
          Filenames in the STDLIB User's Guide. Notice that
          this value also applies to command-line parameters and environment
          variables (see section 
          Unicode in Environment and Parameters in the STDLIB
          User's Guide). 
- +fnu[{w|i|e}]
- 
        The virtual machine works with filenames as if they are encoded
          using UTF-8 (or some other system-specific Unicode encoding). This is
          the default on operating systems that enforce Unicode encoding, that
          is, Windows, MacOS X, and Android. The +fnu switch can be followed by w, i, or
          e to control how wrongly encoded filenames are to be
          reported: 
          - 
            w means that a warning is sent to the error_logger
              whenever a wrongly encoded filename is "skipped" in directory
              listings. This is the default. 
- 
            i means that those wrongly encoded filenames are silently
              ignored. 
- 
            e means that the API function returns an error whenever a
              wrongly encoded filename (or directory name) is encountered. 
 Notice that 
          file:read_link/1 always returns an error if the link
          points to an invalid filename. For more information about Unicode filenames, see section
          Unicode
          Filenames in the STDLIB User's Guide. Notice that
          this value also applies to command-line parameters and environment
          variables (see section 
          Unicode in Environment and Parameters in the STDLIB
          User's Guide). 
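        The filename encoding selected by these flags can be verified
          at runtime, for example: 
% erl +fnu
1> file:native_name_encoding().
utf8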
- +fna[{w|i|e}]
- 
        Selection between +fnl and +fnu is done based
          on the current locale settings in the OS. This means that if you
          have set your terminal for UTF-8 encoding, the filesystem is
          expected to use the same encoding for filenames. This is the default
          on all operating systems, except Android, MacOS X and Windows. The +fna switch can be followed by w, i, or
          e. This has effect if the locale settings cause the behavior
          of +fnu to be selected; see the description of +fnu
          above. If the locale settings cause the behavior of +fnl to be
          selected, then w, i, or e have no effect. For more information about Unicode filenames, see section
          Unicode
          Filenames in the STDLIB User's Guide. Notice that
          this value also applies to command-line parameters and environment
          variables (see section 
          Unicode in Environment and Parameters in the STDLIB
          User's Guide). 
- +hms Size
- 
        Sets the default heap size of processes to the size
          Size. 
- +hmbs Size
- 
        Sets the default binary virtual heap size of processes to the size
          Size. 
- +hmax Size
- 
        Sets the default maximum heap size of processes to the size
          Size. Defaults to 0, which means that no
          maximum heap size is used. For more information, see
          
          process_flag(max_heap_size, MaxHeapSize). 
- +hmaxel true|false
- 
        Sets whether to send an error logger message or not for processes
          reaching the maximum heap size. Defaults to true.
          For more information, see
          
          process_flag(max_heap_size, MaxHeapSize). 
- +hmaxk true|false
- 
        Sets whether to kill processes reaching the maximum heap size or not.
          Defaults to true. For more information, see
          
          process_flag(max_heap_size, MaxHeapSize). 
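        The defaults set by +hmax, +hmaxel, and +hmaxk can be
          overridden per process with the map form of
          process_flag(max_heap_size, MaxHeapSize). An illustrative
          override (the size value is arbitrary); the returned map is
          the previous setting: 
1> process_flag(max_heap_size, #{size => 1000000, kill => true, error_logger => true}).
#{error_logger => true,kill => true,size => 0}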
- +hpds Size
- 
        Sets the initial process dictionary size of processes to the size
          Size. 
- +hmqd off_heap|on_heap
- 
        Sets the default value of the message_queue_data process flag.
          Defaults to on_heap. For more information, see
          
          process_flag(message_queue_data, MQD). 
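        For example, with the emulator default changed on the command
          line, the old value is returned when a process overrides it: 
% erl +hmqd off_heap
1> process_flag(message_queue_data, on_heap).
off_heap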
- +IOp PollSets
- 
        Sets the number of IO pollsets to use when polling for I/O.
          This option is only used on platforms that support concurrent
          updates of a pollset, otherwise the same number of pollsets
          are used as IO poll threads.
          The default is 1.
         
- +IOt PollThreads
- 
        Sets the number of IO poll threads to use when polling for I/O.
          The maximum number of poll threads allowed is 1024. The default is 1.
         A good way to check if more IO poll threads are needed is to use
          microstate accounting
          and see what the load of the IO poll thread is. If it is high it could
          be a good idea to add more threads. 
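          For example, a minimal microstate accounting session
          (msacc(3) is part of the runtime_tools application); the
          printed table includes a row for the poll thread: 
1> msacc:start(1000), msacc:print().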
- +IOPp PollSetsPercentage
- 
        Similar to +IOp but uses
          percentages to set the number of IO pollsets to create, based on the
          number of poll threads configured. If both +IOPp and +IOp
          are used, +IOPp is ignored.
         
- +IOPt PollThreadsPercentage
- 
        Similar to +IOt but uses
          percentages to set the number of IO poll threads to create, based on
          the number of schedulers configured. If both +IOPt and
          +IOt are used, +IOPt is ignored.
         
- +JPperf true|false
- 
        Enables or disables support for the `perf` profiler when running
          with the JIT on Linux. Defaults to false. For more details about how to run perf, see the
          perf support
          section in the BeamAsm internal documentation.
         
- +L
- 
        Prevents loading information about source filenames and line
          numbers. This saves some memory, but exceptions do not contain
          information about the filenames and line numbers. 
- +MFlag Value
- 
        Memory allocator-specific flags. For more information, see
          erts_alloc(3). 
- +pc Range
- 
        Sets the range of characters that the system considers printable in
          heuristic detection of strings. This typically affects the shell,
          debugger, and io:format functions (when ~tp is used in
          the format string). Two values are supported for Range: 
          - latin1
- The default. Only characters in the ISO Latin-1 range can be
            considered printable. This means that a character with a code point
            > 255 is never considered printable and that lists containing
            such characters are displayed as lists of integers rather than text
            strings by tools.
- unicode
- All printable Unicode characters are considered when
            determining if a list of integers is to be displayed in
            string syntax. This can give unexpected results if, for
            example, your font does not cover all Unicode characters.
 See also 
          io:printable_range/0 in STDLIB. 
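        For example, on a terminal capable of displaying the
          characters: 
% erl +pc unicode
1> io:printable_range().
unicode
2> io:format("~tp~n", [[1090,1077,1089,1090]]).
"тест"
ok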
- +P Number
- 
	Sets the maximum number of simultaneously existing processes for this
          system if a Number is passed as value. Valid range for
	Number is [1024-134217727]. NOTE: The actual maximum chosen may be much larger than
	the Number passed. Currently the runtime system often,
	but not always, chooses a value that is a power of 2. This might,
	however, be changed in the future. The actual value chosen can be
	checked by calling
	erlang:system_info(process_limit). The default value is 262144. 
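        For example (as noted above, the runtime system may round the
          passed value up, often to a power of 2): 
% erl +P 2000000
1> erlang:system_info(process_limit).
2097152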
- +Q Number
- 
	Sets the maximum number of simultaneously existing ports for this
          system if a Number is passed as value. Valid range for Number
	is [1024-134217727]. NOTE: The actual maximum chosen may be much larger than
	the actual Number passed. Currently the runtime system often,
	but not always, chooses a value that is a power of 2. This might,
	however, be changed in the future. The actual value chosen can be
	checked by calling
	erlang:system_info(port_limit). The default value used is normally 65536. However, if
	the runtime system is able to determine the maximum number of file
	descriptors that it is allowed to open and this value is larger
	than 65536, the chosen value will be increased to a value
	larger than or equal to the maximum number of file descriptors that
	can be opened. On Windows the default value is set to 8196 because the
	normal OS limitations are set higher than most machines can handle. 
- +R ReleaseNumber
- 
        Sets the compatibility mode. The distribution mechanism is not backward compatible by
          default. This flag sets the emulator in compatibility mode
          with an earlier Erlang/OTP release ReleaseNumber.
          The release number must be in the range
          <current release>-2..<current release>. This
          limits the emulator, making it possible for it to communicate
          with Erlang nodes (as well as C- and Java nodes) running that
          earlier release. 
Note 
           Ensure that all nodes (Erlang-,  C-, and Java nodes) of
            a distributed Erlang system are of the same Erlang/OTP release,
            or from two different Erlang/OTP releases X and Y, where
            all Y nodes have compatibility mode X. 
 
- +r
- 
        Forces ETS memory blocks to be moved on realloc. 
- +rg ReaderGroupsLimit
- 
        Limits the number of reader groups used by read/write locks
          optimized for read operations in the Erlang runtime system. By
          default the reader groups limit is 64. When the number of schedulers is less than or equal to the reader
          groups limit, each scheduler has its own reader group. When the
          number of schedulers is larger than the reader groups limit,
          schedulers share reader groups. Shared reader groups degrade
          read lock and read unlock performance while many
          reader groups degrade write lock performance. So, the limit is a
          tradeoff between performance for read operations and performance
           for write operations. Each reader group consumes 64 bytes
          in each read/write lock. Notice that a runtime system using shared reader groups benefits from
          binding schedulers to logical
          processors, as the reader groups are distributed better
          between schedulers. 
- +S Schedulers:SchedulersOnline
- 
        Sets the number of scheduler threads to create and scheduler threads
          to set online. The maximum for both values is 1024. If the Erlang
          runtime system is able to determine the number of logical processors
          configured and logical processors available, Schedulers
          defaults to logical processors configured, and
          SchedulersOnline defaults to logical processors available;
          otherwise the default values are 1. If the emulator detects that it
          is subject to a CPU
          quota, the default value for SchedulersOnline will
          be limited accordingly. 
          Schedulers can be omitted if :SchedulersOnline is not,
          and conversely. The number of schedulers online can be changed at
          runtime through
          
          erlang:system_flag(schedulers_online,
          SchedulersOnline). If Schedulers or SchedulersOnline is specified as a
          negative number, the value is subtracted from the default number of
          logical processors configured or logical processors available,
          respectively. Specifying value 0 for Schedulers or
          SchedulersOnline resets the number of scheduler threads or
          scheduler threads online, respectively, to its default value. 
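        For example (illustrative values): 
% erl +S 4:2
1> erlang:system_info(schedulers).
4
2> erlang:system_info(schedulers_online).
2
3> erlang:system_flag(schedulers_online, 4).
2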
- +SP
        SchedulersPercentage:SchedulersOnlinePercentage
- 
        Similar to +S but uses
          percentages to set the number of scheduler threads to create, based
          on logical processors configured, and scheduler threads to set online,
          based on logical processors available.
          Specified values must be > 0. For example,
          +SP 50:25 sets the number of scheduler threads to 50% of the
          logical processors configured, and the number of scheduler threads
          online to 25% of the logical processors available.
          SchedulersPercentage can be omitted if
          :SchedulersOnlinePercentage is not and conversely. The number
          of schedulers online can be changed at runtime through
          
          erlang:system_flag(schedulers_online,
          SchedulersOnline). This option interacts with +S
          settings. For example, on a system with 8 logical cores configured
          and 8 logical cores available, the combination of the options
          +S 4:4 +SP 50:25 (in either order) results in 2 scheduler
          threads (50% of 4) and 1 scheduler thread online (25% of 4). 
- +SDcpu
        DirtyCPUSchedulers:DirtyCPUSchedulersOnline
- 
        Sets the number of dirty CPU scheduler threads to create and dirty
          CPU scheduler threads to set online.
          The maximum for both values is 1024, and each value is
          further limited by the settings for normal schedulers: 
          - The number of dirty CPU scheduler threads created cannot exceed
            the number of normal scheduler threads created.
- The number of dirty CPU scheduler threads online cannot exceed
            the number of normal scheduler threads online.
 For details, see the +S and
          +SP flags. By default, the number
          of dirty CPU scheduler threads created equals the number of normal
          scheduler threads created, and the number of dirty CPU scheduler
          threads online equals the number of normal scheduler threads online.
          DirtyCPUSchedulers can be omitted if
          :DirtyCPUSchedulersOnline is not and conversely. The number of
          dirty CPU schedulers online can be changed at runtime through
          
          erlang:system_flag(dirty_cpu_schedulers_online,
          DirtyCPUSchedulersOnline). The amount of dirty CPU schedulers is limited by the amount of
	  normal schedulers in order to limit the effect on processes
	  executing on ordinary schedulers. If the amount of dirty CPU
	  schedulers was allowed to be unlimited, dirty CPU bound jobs would
	  potentially starve normal jobs. Typical users of the dirty CPU schedulers are large garbage collections,
          JSON protocol encoders/decoders written as NIFs, and matrix manipulation
          libraries. You can use msacc(3)
          in order to see the current load of the dirty CPU schedulers threads
          and adjust the number used accordingly. 
- +SDPcpu
        DirtyCPUSchedulersPercentage:DirtyCPUSchedulersOnlinePercentage
- 
        Similar to +SDcpu but
          uses percentages to set the number of dirty CPU scheduler threads to
          create and the number of dirty CPU scheduler threads to set online.
          Specified values must be
          > 0. For example, +SDPcpu 50:25 sets the number of dirty
          CPU scheduler threads to 50% of the logical processors configured
          and the number of dirty CPU scheduler threads online to 25% of the
          logical processors available. DirtyCPUSchedulersPercentage can
          be omitted if :DirtyCPUSchedulersOnlinePercentage is not and
          conversely. The number of dirty CPU schedulers online can be changed
          at runtime through
          
          erlang:system_flag(dirty_cpu_schedulers_online,
          DirtyCPUSchedulersOnline). This option interacts with +SDcpu settings. For example, on a
          system with 8 logical cores configured and 8 logical cores available,
          the combination of the options +SDcpu 4:4 +SDPcpu 50:25 (in
          either order) results in 2 dirty CPU scheduler threads (50% of 4) and
          1 dirty CPU scheduler thread online (25% of 4). 
- +SDio DirtyIOSchedulers
- 
        Sets the number of dirty I/O scheduler threads to create.
          Valid range is 1-1024. By
          default, the number of dirty I/O scheduler threads created is 10. The number of dirty IO schedulers is not limited by the number of
	  normal schedulers like the number of dirty CPU schedulers is,
	  since only I/O bound work is
	  expected to execute on dirty I/O schedulers. If the user
	  schedules CPU bound jobs on dirty I/O schedulers, these jobs might starve ordinary
	  jobs executing on ordinary schedulers. Typical uses of the dirty IO schedulers are reading and writing to files. You can use msacc(3)
          in order to see the current load of the dirty IO schedulers threads
          and adjust the number used accordingly. 
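        For example, the number of dirty I/O scheduler threads created
          can be inspected at runtime: 
% erl +SDio 20
1> erlang:system_info(dirty_io_schedulers).
20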
- +sFlag Value
- 
        Scheduling-specific flags. 
          - +sbt BindType
- 
            Sets scheduler bind type. Schedulers can also be bound using flag
              +stbt. The only
              difference between these two flags is how the following errors
              are handled: 
              - Binding of schedulers is not supported on the specific
                platform.
- No available CPU topology. That is, the runtime system was
                not able to detect the CPU topology automatically, and no
                user-defined CPU topology
                was set.
 If any of these errors occur when +sbt has been passed,
              the runtime system prints an error message, and refuses to
              start. If any of these errors occur when +stbt has been
              passed, the runtime system silently ignores the error, and
              starts up using unbound schedulers. Valid BindTypes: 
              - u
- 
unbound - Schedulers are not bound to logical
                processors, that is, the operating system decides where the
                scheduler threads execute, and when to migrate them. This is
                the default.
              
- ns
- 
no_spread - Schedulers with close scheduler
                identifiers are bound as close as possible in hardware.
              
- ts
- 
thread_spread - Thread refers to hardware threads
                (such as Intel's hyper-threads). Schedulers with low scheduler
                identifiers are bound to the first hardware thread of
                each core, then schedulers with higher scheduler identifiers
                are bound to the second hardware thread of each core, and so on.
              
- ps
- 
processor_spread - Schedulers are spread like
                thread_spread, but also over physical processor chips.
              
- s
- 
spread - Schedulers are spread as much as possible.
              
- nnts
- 
no_node_thread_spread - Like thread_spread,
                but if multiple Non-Uniform Memory Access (NUMA) nodes exist,
                schedulers are spread over one NUMA node at a time,
                that is, all logical processors of one NUMA node are bound
                to schedulers in sequence.
              
- nnps
- 
no_node_processor_spread - Like
                processor_spread, but if multiple NUMA nodes exist,
                schedulers are spread over one NUMA node at a time, that is,
                all logical processors of one NUMA node are bound to
                schedulers in sequence.
              
- tnnps
- 
thread_no_node_processor_spread - A combination of
                thread_spread, and no_node_processor_spread.
                Schedulers are spread over hardware threads across NUMA
                nodes, but schedulers are only spread over processors
                internally in one NUMA node at a time.
              
- db
- 
default_bind - Binds schedulers the default way.
                Defaults to thread_no_node_processor_spread
                (which can change in the future).
              
 Binding of schedulers is only supported on newer
              Linux, Solaris, FreeBSD, and Windows systems. If no CPU topology is available when flag +sbt
              is processed and BindType is any other type than
              u, the runtime system fails to start. CPU
              topology can be defined using flag
              +sct. Notice
              that flag +sct might have to be passed before flag
              +sbt on the command line (if no CPU topology
              has been automatically detected). By default, the runtime system does not bind schedulers
              to logical processors. 
Note 
               If the Erlang runtime system is the only operating system
                process that binds threads to logical processors, this
                improves the performance of the runtime system. However,
                if other operating system processes (for example
                another Erlang runtime system) also bind threads to
                logical processors, there can be a performance penalty
                instead. This performance penalty can sometimes be
                severe. If so, you are advised not to
                bind the schedulers. 
 How schedulers are bound matters. For example, in
              situations when there are fewer running processes than
              schedulers online, the runtime system tries to migrate
              processes to schedulers with low scheduler identifiers.
              The more the schedulers are spread over the hardware,
              the more resources are available to the runtime
              system in such situations. 
Note 
               If a scheduler fails to bind, this is
                often silently ignored, as it is not always
                possible to verify valid logical processor identifiers. If
                an error is reported, it is reported to the
                error_logger. If you want to verify that the
                schedulers have bound as requested, call
                
                erlang:system_info(scheduler_bindings). 
 
- +sbwt none|very_short|short|medium|long|very_long
- 
            Sets scheduler busy wait threshold. Defaults to medium.
              The threshold determines how long schedulers are to busy
              wait when running out of work before going to sleep. 
Note 
               This flag can be removed or changed at any time
                without prior notice. 
 
- +sbwtdcpu none|very_short|short|medium|long|very_long
- 
            As +sbwt but affects
              dirty CPU schedulers. Defaults to short. 
Note 
               This flag can be removed or changed at any time
                without prior notice. 
 
- +sbwtdio none|very_short|short|medium|long|very_long
- 
            As +sbwt but affects
              dirty IO schedulers. Defaults to short. 
Note 
               This flag can be removed or changed at any time
                without prior notice. 
 
- +scl true|false
- 
            Enables or disables scheduler compaction of load. By default
              scheduler compaction of load is enabled. When enabled, load
              balancing strives for a load distribution, which causes
              as many scheduler threads as possible to be fully loaded (that is,
              not run out of work). This is accomplished by migrating load
              (for example, runnable processes) into a smaller set of schedulers
              when schedulers frequently run out of work. When disabled,
              the frequency with which schedulers run out of work is
              not taken into account by the load balancing logic. +scl false is similar to
              +sub true, but
              +sub true also balances scheduler utilization
              between schedulers. 
- +sct CpuTopology
- 
            
              - <Id> = integer(); when 0 =< <Id> =< 65535
              - <IdRange> = <Id>-<Id>
              - <IdOrIdRange> = <Id> | <IdRange>
              - <IdList> = <IdOrIdRange>,<IdOrIdRange> | <IdOrIdRange>
              - <LogicalIds> = L<IdList>
              - <ThreadIds> = T<IdList> | t<IdList>
              - <CoreIds> = C<IdList> | c<IdList>
              - <ProcessorIds> = P<IdList> | p<IdList>
              - <NodeIds> = N<IdList> | n<IdList>
              - <IdDefs> =
                <LogicalIds><ThreadIds><CoreIds><ProcessorIds><NodeIds> |
                <LogicalIds><ThreadIds><CoreIds><NodeIds><ProcessorIds>
              - CpuTopology = <IdDefs>:<IdDefs> | <IdDefs>
 Sets a user-defined CPU topology. The user-defined
              CPU topology overrides any automatically detected
              CPU topology. The CPU topology is used when
              binding schedulers to logical
              processors. Uppercase letters signify real identifiers and lowercase
	      letters signify fake identifiers only used for description
              of the topology. Identifiers passed as real identifiers can
              be used by the runtime system when trying to access specific
              hardware; if they are incorrect the behavior is
              undefined. Faked logical CPU identifiers are not accepted,
              as there is no point in defining the CPU topology without
              real logical CPU identifiers. Thread, core, processor, and
              node identifiers can be omitted. If omitted, the thread ID
              defaults to t0, the core ID defaults to c0,
              the processor ID defaults to p0, and the node ID is
              left undefined. Either each logical processor must
              belong to only one NUMA node, or no logical
              processors must belong to any NUMA nodes. Both increasing and decreasing <IdRange>s
              are allowed. NUMA node identifiers are system wide. That is, each NUMA
              node on the system must have a unique identifier. Processor
              identifiers are also system wide. Core identifiers are
              processor wide. Thread identifiers are core wide. The order of the identifier types implies the hierarchy of the
              CPU topology. The valid orders are as follows: 
              - 
                <LogicalIds><ThreadIds><CoreIds><ProcessorIds><NodeIds>,
                  that is, thread is part of a core that is part of a processor,
                  which is part of a NUMA node. 
- 
                <LogicalIds><ThreadIds><CoreIds><NodeIds><ProcessorIds>,
                  that is, thread is part of a core that is part of a NUMA node,
                  which is part of a processor. 
 A CPU topology can consist of both processor external, and
              processor internal NUMA nodes as long as each logical processor
              belongs to only one NUMA node. If
              <ProcessorIds> is omitted, its default position
              is before <NodeIds>. That is, the default is
              processor external NUMA nodes. If a list of identifiers is used in an
              <IdDefs>: 
              - 
<LogicalIds> must be a list
                of identifiers.
- At least one other identifier type besides
                <LogicalIds> must also have a
                list of identifiers.
- All lists of identifiers must produce the
                same number of identifiers.
 A simple example. A single quad core processor can be
              described as follows: 
% erl +sct L0-3c0-3
1> erlang:system_info(cpu_topology).
[{processor,[{core,{logical,0}},
             {core,{logical,1}},
             {core,{logical,2}},
             {core,{logical,3}}]}]
A more complicated example with two quad core
              processors, each processor in its own NUMA node.
              The ordering of logical processors is a bit weird,
              in order to give a better example of identifier lists: 
% erl +sct L0-1,3-2c0-3p0N0:L7,4,6-5c0-3p1N1
1> erlang:system_info(cpu_topology).
[{node,[{processor,[{core,{logical,0}},
                    {core,{logical,1}},
                    {core,{logical,3}},
                    {core,{logical,2}}]}]},
 {node,[{processor,[{core,{logical,7}},
                    {core,{logical,4}},
                    {core,{logical,6}},
                    {core,{logical,5}}]}]}]
As long as real identifiers are correct, it is OK
              to pass a CPU topology that is not a correct
              description of the CPU topology. When used with
              care this can be very useful, as it
              can trick the emulator into binding its schedulers
              as you want. For example, if you want to run multiple
              Erlang runtime systems on the same machine, you
              want to reduce the number of schedulers used and
              manipulate the CPU topology so that they bind to
              different logical CPUs. An example, with two Erlang
              runtime systems on a quad core machine: 
% erl +sct L0-3c0-3 +sbt db +S3:2 -detached -noinput -noshell -sname one
% erl +sct L3-0c0-3 +sbt db +S3:2 -detached -noinput -noshell -sname two 
In this example, each runtime system has two
              schedulers online, and all schedulers online
              will run on different cores. If we change to one
              scheduler online on one runtime system, and three
              schedulers online on the other, all schedulers
              online will still run on different cores. Notice that a faked CPU topology that does not reflect
              the real CPU topology is likely to
              decrease the performance of the runtime system. For more information, see
              
              erlang:system_info(cpu_topology). 
- +sfwi Interval
- 
            Sets scheduler-forced wakeup interval. All run queues are
              scanned each Interval milliseconds. While there are
              sleeping schedulers in the system, one scheduler is woken
              for each non-empty run queue found. Interval defaults
              to 0, meaning this feature is disabled. 
Note 
               This feature has been introduced as a temporary workaround
                for long-executing native code, and native code that does not
                bump reductions properly in OTP. When these bugs have been
                fixed, this flag will be removed. 
 
- +spp Bool
- 
            Sets default scheduler hint for port parallelism. If set to
              true, the virtual machine schedules port tasks when it
              improves parallelism in the system. If set to false, the
              virtual machine tries to perform port tasks immediately,
              improving latency at the expense of parallelism. Default to
              false. The default used can be inspected in runtime by
              calling 
              erlang:system_info(port_parallelism).
              The default can be overridden on port creation by passing option
              
              parallelism to
              
              erlang:open_port/2. 
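              For example, the default can be set on the command line
              and then overridden for a single port (the spawned command
              and the printed port identifier are just illustrations): 
% erl +spp true
1> erlang:system_info(port_parallelism).
true
2> erlang:open_port({spawn, "cat"}, [{parallelism, false}]).
#Port<0.5>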
- +sss size
- 
            Suggested stack size, in kilowords, for scheduler threads.
              Valid range is 20-8192 kilowords. The default suggested
	      stack size is 128 kilowords. 
- +sssdcpu size
- 
            Suggested stack size, in kilowords, for dirty CPU scheduler
	      threads. Valid range is 20-8192 kilowords. The default
	      suggested stack size is 40 kilowords. 
- +sssdio size
- 
            Suggested stack size, in kilowords, for dirty IO scheduler
	      threads. Valid range is 20-8192 kilowords. The default
	      suggested stack size is 40 kilowords. 
- +stbt BindType
- 
            Tries to set the scheduler bind type. The same as flag
              +sbt except
              how some errors are handled. For more information, see
              +sbt. 
- +sub true|false
- 
            Enables or disables
              
              scheduler utilization balancing of load. By default
              scheduler utilization balancing is disabled and instead scheduler
              compaction of load is enabled, which strives for a load
              distribution that causes as many scheduler threads as possible
              to be fully loaded (that is, not run out of work). When scheduler
              utilization balancing is enabled, the system instead tries to
              balance scheduler utilization between schedulers. That is,
              strive for equal scheduler utilization on all schedulers. +sub true is only supported on systems where the runtime
              system detects and uses a monotonically increasing high-resolution
              clock. On other systems, the runtime system fails to start. +sub true implies 
              +scl false. The difference between
              +sub true and +scl false is that +scl false
              does not try to balance the scheduler utilization. 
- +swct very_eager|eager|medium|lazy|very_lazy
- 
            Sets scheduler wake cleanup threshold. Defaults to medium.
              Controls how eager schedulers are to be requesting
              wakeup because of certain cleanup operations. When a lazy setting
              is used, more outstanding cleanup operations can be left undone
              while a scheduler is idling. When an eager setting is used,
              schedulers are more frequently woken, potentially increasing
              CPU-utilization. 
Note 
               This flag can be removed or changed at any time without prior
                notice. 
 
- +sws default|legacy
- 
            Sets scheduler wakeup strategy. Default strategy changed in
              ERTS 5.10 (Erlang/OTP R16A). This strategy was known as
              proposal in Erlang/OTP R15. The legacy strategy
              was used as default from R13 up to and including R15. 
Note 
               This flag can be removed or changed at any time without prior
                notice. 
 
- +swt very_low|low|medium|high|very_high
- 
            Sets scheduler wakeup threshold. Defaults to medium.
              The threshold determines when to wake up sleeping schedulers
              when more work than can be handled by currently awake schedulers
              exists. A low threshold causes earlier wakeups, and a high
              threshold causes later wakeups. Early wakeups distribute work
              over multiple schedulers faster, but work bounces more easily
              between schedulers. 
Note 
               This flag can be removed or changed at any time without prior
                notice. 
 
- +swtdcpu very_low|low|medium|high|very_high
- 
            As +swt but
              affects dirty CPU schedulers. Defaults to medium. 
Note 
               This flag can be removed or changed at any time
                without prior notice. 
 
- +swtdio very_low|low|medium|high|very_high
- 
            As +swt but affects
              dirty IO schedulers. Defaults to medium. 
Note 
               This flag can be removed or changed at any time
                without prior notice. 
 
 
- +t size
- 
        Sets the maximum number of atoms the virtual machine can handle.
          Defaults to 1,048,576. 
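        For example (illustrative value), the limit can be inspected
          at runtime: 
% erl +t 2097152
1> erlang:system_info(atom_limit).
2097152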
- +T Level
- 
        Enables modified timing and sets the modified timing level. Valid
          range is 0-9. The timing of the runtime system is changed. A high
          level usually means a greater change than a low level. Changing the
          timing can be very useful for finding timing-related bugs. Modified timing affects the following: 
          - Process spawning
- A process calling spawn,
            spawn_link, spawn_monitor,
            or spawn_opt is scheduled out immediately
            after completing the call. When higher modified timing levels are
            used, the caller also sleeps for a while after it is scheduled out.
          
- Context reductions
- The number of reductions a process is allowed to use before it
            is scheduled out is increased or reduced.
          
- Input reductions
- The number of reductions performed before checking I/O is
            increased or reduced.
          
 
Note 
           Performance suffers when modified timing is enabled. This flag is
            only intended for testing and debugging. return_to and return_from
            trace messages are lost when tracing on the spawn BIFs. This flag can be removed or changed at any time without prior
            notice. 
 
- +v
- 
        Verbose. 
- +V
- 
        Makes the emulator print its version number. 
- +W w | i | e
- 
        Sets the mapping of warning messages for
          error_logger. Messages sent to the error logger
          using one of the warning routines can be mapped to errors
          (+W e), warnings (+W w), or
          information reports (+W i). Defaults to warnings.
          The current mapping can be retrieved using
          error_logger:warning_map/0. For more information,
          see 
          error_logger:warning_map/0 in Kernel. 
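        For example: 
% erl +W i
1> error_logger:warning_map().
info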
- +zFlag Value
- 
        Miscellaneous flags: 
          - +zdbbl size
- 
            Sets the distribution buffer busy limit
              (dist_buf_busy_limit)
              in kilobytes. Valid range is 1-2097151. Defaults to 1024. A larger buffer limit allows processes to buffer
              more outgoing messages over the distribution. When the
              buffer limit has been reached, sending processes will be
              suspended until the buffer size has shrunk. The buffer
              limit is per distribution channel. A higher limit
              gives lower latency and higher throughput at the expense
              of higher memory use. 
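              For example, the limit is reported in bytes by
              erlang:system_info(dist_buf_busy_limit): 
% erl +zdbbl 2048
1> erlang:system_info(dist_buf_busy_limit).
2097152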
- +zdntgc time
- 
            Sets the delayed node table garbage collection time
              (delayed_node_table_gc)
              in seconds. Valid values are either infinity or
              an integer in the range 0-100000000. Defaults to 60. Node table entries that are not referred to linger
              in the table for at least the amount of time that this
              parameter determines. The lingering prevents repeated
              deletions and insertions in the tables from occurring. 
- +zosrl limit
- 
            
              Sets a limit on the number of outstanding requests made by
              a system process orchestrating system wide changes. The valid
              range of this limit is [1, 134217727]. See
              
              erlang:system_flag(outstanding_system_requests_limit, Limit)
              for more information.