IBM Power Systems Performance Capabilities Reference
IBM i operating system Version 6.1
January/April/October 2008
This document is intended for use by qualified performance-related programmers or analysts from IBM, IBM Business Partners, and IBM customers using the IBM Power Systems platform running the IBM i operating system.
Note! Before using this information, be sure to read the general information under “Special Notices.”
Twenty-Fifth Edition (January/April/October 2008) SC41-0607-13
This edition applies to the IBM i operating system V6.1 running on IBM Power Systems.
Table of Contents
4.12 Variable Length Fields ..... 59
4.13 Reuse Deleted Record Space ..... 61
4.14 Performance References for DB2 ..... 62
10.1 DB2 for i5/OS access with JDBC ..... 153
JDBC Performance Tuning Tips ..... 153
References for JDBC ..... 154
14.1.1 Hardware Characteristics ..... 192
14.1.2 iV5R2 Direct Attach DASD ..... 193
14.1.3 571B ..... 195
Use Optimum Block Size (USEOPTBLK) ..... 244
Data Compression (DTACPR) ..... 244
Data Compaction (COMPACT) ..... 244
17.2.1 IXS/IXA Disk I/O Operations ..... 281
17.2.2 iSCSI Disk I/O Operations ..... 282
17.2.3 iSCSI virtual I/O private memory pool ..... 283
20.5 POWER6 520 Memory Considerations ..... 324
20.6 Aligning Floating Point Data on Power6 ..... 325
Chapter 21. High Availability Performance ..... 327
C.16 AS/400e Custom Application Server Model SB1 ..... 366
C.17 AS/400 Models 4xx, 5xx and 6xx Systems ..... 367
C.18 AS/400 CISC Model Capacities ..... 368
Special Notices
DISCLAIMER NOTICE
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. This information is presented along with general recommendations to assist the reader in gaining a better understanding of IBM(*) products.
The following terms, which may or may not be denoted by an asterisk (*) in this publication, are trademarks of the IBM Corporation: Power Systems Software, PowerPC, POWER6+.
Purpose of this Document
The intent of this document is to provide guidance on IBM i operating system performance, capacity planning information, and tips to obtain optimal performance on the IBM i operating system. This document is typically updated with each new release, or more often if needed.
Chapter 1. Introduction IBM System i and IBM System p platforms unified the value of their servers into a single, powerful lineup of servers based on industry leading POWER6 processor technology with support for IBM i operating system (formerly known as i5/OS), IBM AIX and Linux for Power.
versions. The primary public performance information web site is found at: http://www.ibm.com/systems/i/advantages/perfmgmt/index.html
Chapter 2. iSeries and AS/400 RISC Server Model Performance Behavior 2.1 Overview iSeries and AS/400 servers are intended for use primarily in client/server or other non-interactive work environments such as batch, business intelligence, network computing etc.
interactive utilization - an average for the interval. Since average utilization does not indicate potential problems associated with peak activity, a second metric (SCIFTE) reports the amount of interactive utilization that occurred above threshold.
2.1.4 V5R2 and V5R1 There were several new iSeries 8xx and 270 server model additions in V5R1 and the i890 in V5R2. However, with the exception of the DSD models, the underlying server behavior did not change from V4R5.
• The new server algorithm only applies to the new hardware available in V4R5 (2xx, 8xx and SBx models). The behavior of all other hardware, such as the 7xx models, is unchanged (see section 2.
grows at a rate which can eventually eliminate server/batch capacity and limit additional interactive growth. It is best for interactive workloads to execute below (less than) the knee of the curve. (However, for those models having the knee at 1/3 of the total interactive capacity, satisfactory performance can be achieved.)
2.3 Server Model Differences Server models were designed for a client/server workload and to accommodate an interactive workload. When the interactive workload exceeds an interactive CPW threshold (th.
[Figure 2. Custom Server Model CPU Distribution vs. Interactive Utilization: available CPU, CFINT, and interactive CPU plotted against the fraction of interactive CPW used, with the knee at 6/7 of full interactive capacity and the remainder available for client/server work. Applies to: AS/400e Custom Servers, AS/400e Mixed Mode Servers.]
2.4 Performance Highlights of Model 7xx Servers 7xx models were designed to accommodate a mixture of traditional “green screen” applications and more intensive “server” environments. Interactive features may be upgraded if additional interactive capacity is required.
2.5 Performance Highlights of Model 170 Servers iSeries Dedicated Server for Domino models will be generally available on September 24, 1999. Please refer to Section 2.13, iSeries Dedicated Server for Domino Performance Behavior , for additional information.
The next chart shows the performance capacity of the current and previous Model 170 servers.
[Figure: Previous vs. Current AS/400e server 170 Performance (unconstrained V4R2 rates).]
and higher than normal CFINT values. The goal is to avoid exceeding the threshold (knee of the curve) value of interactive capacity.
2.8 Interactive Utilization
When the interactive CPW utilization i.
Now if the interactive CPU is held to less than 4% CPU (the knee), then the CPU available for the system, batch, and client/server work is 100% minus the interactive CPU used. For example, if interactive work consumes 3% of the CPU, then 97% remains available for all other work.
If customers modify an IBM-supplied class description, they are responsible for ensuring the priority value is 35 or less after each new release or cumulative PTF package has been installed. One way to do this is to include the Change Class (CHGCLS) command in the system Start Up program.
Server Dynamic Tuning Recommendations
On the new systems and mixed mode servers, have the QDYNPTYSCD and QDYNPTYADJ system values set on. This preserves non-interactive capacities, and the interactive response times will be dynamic beyond the knee regardless of the setting.
2.10 Managing Interactive Capacity
Interactive/Server characteristics in the real world. Graphs and formulas listed thus far work perfectly, provided the workload on the system is highly regular and steady in nature. Of course, very few systems have workloads like that.
There are other means for determining interactive utilization. The easiest of these is the performance monitoring function of Management Central, which became available with V4R3.
2. A similar effect can be found with index builds. If parallelism is enabled, index creation (CRTLF, Create Index, Open a file with MAINT(*REBUILD), or running a query that requires an index to be built) will be sent to service jobs that operate in non-interactive mode, but charge their work back to the job that requested the service.
2.11 Migration from Traditional Models
This section describes a suggested methodology to determine which server model is appropriate to contain the interactive workload of a traditional model when a migration of a workload is occurring. It is assumed that the server model will have both interactive and client/server workloads.
***********************************************************************************
Component Report
Component Interval Activity
Data collected 190396 at 1030
Member . . . . : Q960791030      Model/Serial . : 310-2043/10-0751D
Main St. . . . :                 Library. . . . : PFR
System name. . :
one third of the total possible interactive workload, for non-custom models. The equation shown in this section will migrate a traditional system to a server system and keep the interactive workload at or below the knee of the curve, that is, using less than two thirds of the total possible interactive workload.
2.13 iSeries for Domino and Dedicated Server for Domino Performance Behavior
In preparation for future Domino releases which will provide support for DB2 files, the previous processing limitations associated with DSD models have been removed in i5/OS V5R3.
Domino-Complementary Processing Prior to V5R1, processing that did not spend the majority of its time in Domino code was considered non-Domino processing and was limited to approximately 10-15% of the system capacity.
Similar to previous DSD performance behavior for interactive processing, the Interactive CPW rating of 0 allows for system administrative functions to be performed by a single interactive user. In practice, a single interactive user will be able to perform necessary administrative functions without constraint.
processing present in the Linux logical partition, and all resources allocated to the Linux logical partition can essentially be used as though it were complementary processing.
Chapter 3. Batch Performance In a commercial environment, batch workloads tend to be I/O intensive rather than CPU intensive. The factors that affect batch throughput for a given batch application inc.
3.3 Tuning Parameters for Batch There are several system parameters that affect batch performance. The magnitude of the effect for each of them depends on the specific application and overall system characteristics. Some general information is provided here.
improve performance by eliminating disk I/O operations.
• If communications lines are involved in the batch application, try to limit the number of communications I/Os by doing fewer (and perhaps larger) application sends and receives. Consider blocking data in the application, as illustrated in the sketch below.
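To illustrate the fewer-but-larger I/O principle in a generic way, the sketch below batches many small application writes into large physical writes through a buffer. This is an illustrative example only, not taken from the workloads in this document; the file name and sizes are hypothetical, and the same idea applies to communications sends and receives.

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.OutputStream;

    public class BlockedWrites {
        public static void main(String[] args) throws Exception {
            // Without the buffer, each 80-byte record write could become a
            // separate physical I/O. With a 64 KB buffer, roughly 800 records
            // are grouped into each physical write.
            OutputStream out = new BufferedOutputStream(
                    new FileOutputStream("batch.out"), 64 * 1024);
            byte[] record = new byte[80];
            for (int i = 0; i < 100000; i++) {
                out.write(record);  // accumulates in memory
            }
            out.close();  // flushes the final partial block
        }
    }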
Chapter 4. DB2 for i5/OS Performance This chapter provides a summary of the new performance features of DB2 for i5/ OS on V6R1, V5R4 and V5R3, along with V5R2 highlights. Summaries of selected key topics on the performance of DB2 for i5/OS are provided.
• DB2 Multisystem tables
New functions available in V6R1 whose use may affect SQL performance are derived key indexes, decimal floating point data type support, and the select from insert statement.
the statement is complete. The implementation to invoke the locking causes a physical DASD write to the journal for each record, which causes journal waits. Journal caching on allows the journal writes to accumulate in memory and have one DASD write per multiple journal entries, greatly reducing the journal wait time.
Table Expressions (RCTE) which allow for more elegant and better performing implementations of recursive processing. In addition, enhancements have been made in i5/OS V5R4 to the support for materialized query tables (MQTs) and partitioned table processing, which were both new in i5/OS V5R3.
Enhancements to extend the use of materialized query tables (MQTs) were added in i5/OS V5R4. New functions supported in MQT queries by the MQT matching algorithm are unions and partitioned tables, along with limited support for scalar subselects, UDFs and user defined table functions, RCTE, and some scalar functions.
SQL queries which continue to be routed to CQE in i5/OS V5R3 have the following attributes:
• Tables with select/omit logicals over them
• References to DDS logical files
• ALWCPYDTA(*NO)
• LOB columns
Partitioned Table Support Table partitioning is a new feature introduced in i5/OS V5R3. The design is localized on an individual table basis rather than an entire library. The user specifies one or more fields which collectively act as a partitioning key.
• Statistical Strategies
• SMP Considerations
• Administration Examples (Adding a Partition, Dropping a Partition, etc.)
Materialized Query Table Support
The initial release of i5/OS V5R3 includes the.
more information may be used in the query plan costing phase than was available to the optimizer previously. The optimizer may now use newly implemented database statistics to make more accurate decisions when choosing the query access plan.
should be made to determine if the needed statistics are available. Also in environments where long running queries are run only one time, it may be beneficial to ensure that statistics are available prior to running the queries.
SQE for V5R2 Summary
Enhancements to DB2 for i5/OS, called SQE, were made in V5R2. The SQE enhancements are object-oriented implementations of the SQE optimizer, the SQE query engine and the SQE database statistics. In V5R2 a subset of the read-only SQL queries will be optimized and run with the SQE enhancements.
4.6 DB2 Symmetric Multiprocessing feature Introduction The DB2 SMP feature provides application transparent support for parallel query operations on a single tightly-coupled multiprocessor System i (shared memory and disk).
limit the amount of data it brings into and keeps in memory to a job’s share of memory. The amount of memory available to each job is inversely proportional to the number of active jobs in a memory pool. The memory-sharing algorithms discussed above provide balanced performance for all the jobs running in a memory pool.
• Allows customers to replace current programming methods of capturing and transmitting journal entries between systems with more efficient system programming methods. This can result in lower CPU consumption and increased throughput on the source system.
There are 3 sets of tasks which do the SMAPP work. These tasks work in the background at low priority to minimize the impact of SMAPP on system performance. The tasks are as follows:
• JO_EVALUATE-TASK - Evaluates indexes, estimates rebuild time for an index, and may start or stop implicit journaling of an index.
multiple nodes in the cluster, access to the database files is seamless and transparent to the applications and users that reference the database. To the users, the partitioned files still behave as though they were local to their system.
4.10 Referential Integrity In a database user environment, there are frequent cases where the data in one file is dependent upon the data in another file.
The following are performance tips to consider when using trigger support:
• Triggers are activated by an external call. The user needs to weigh the benefit of the trigger against the cost of the external call.
• If a trigger is going to be used, leave as much validation to the trigger program as possible.
To create the variable length field just described, use the following DB2 statement:
CREATE TABLE library/table-name (field VARCHAR(50) ALLOCATE(20) NOT NULL)
In this particular example the field was created with the NOT NULL option. The other two options are NULL and NOT NULL WITH DEFAULT.
01 DESCR.
   49 DESCR-LEN    PIC S9(4) COMP-4.
   49 DESCRIPTION  PIC X(40).

EXEC SQL
   FETCH C1 INTO :DESCR
END-EXEC.

For more detail about the vary-length character string, refer to the SQL Programmer's Guide. The above point is also true when using a high-level language to insert values into a variable length field.
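The same caution applies when inserting from Java. The following minimal JDBC sketch (the system name, user ID, password, and table are hypothetical, reusing the CREATE TABLE example above) passes the value through a parameter marker, so only the string's actual length is stored in the variable-length column:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class VarLenInsert {
        public static void main(String[] args) throws Exception {
            // IBM Toolbox for Java JDBC driver (older JVMs need the
            // explicit load; JDBC 4.0 drivers register automatically).
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:as400://mysystem", "user", "password");

            // FIELD is the VARCHAR(50) ALLOCATE(20) column created above;
            // setString passes the true data length, not a padded value.
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO LIBRARY.TABLENAME (FIELD) VALUES (?)");
            ps.setString(1, "short description");
            ps.executeUpdate();
            ps.close();
            conn.close();
        }
    }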
In contrast, when reuse is active, the database support will process the added record more like an update operation than an add operation. The database support will maintain a bit map to keep track of deleted records and to provide fast access to them.
2. The System i information center section on DB2 for i5/OS under Database and file systems has information on all aspects of DB2 for i5/OS including the section Monitor and Tune database under Administrative topics . This can be found at url: http://www.
Chapter 5. Communications Performance There are many factors that affect System i performance in a communications environment. This chapter discusses some of the common factors and offers guidance on how to help achieve the best possible performance.
• IBM’s Host Ethernet Adapter (HEA) integrated 2-Port 10/100/1000 Base-TX PCI-E IOA supports checksum offloading, 9000-byte jumbo frames (1 Gigabit only) and LSO - Large Send Offload (IPv4 only). These adapters do not require an IOP to be installed in conjunction with the IOA.
Notes:
1. Unshielded Twisted Pair (UTP) card; uses copper wire cabling
2. Uses fiber optics
3. Custom Card Identification Number and System i Feature Code
4. Virtual Ethernet enables you to establish communication via TCP/IP between logical partitions and can be used without any additional hardware or software.
To demonstrate communications performance in various ways, several workload scenarios are analyzed. Each of these scenarios may be executed with regular nonsecure sockets or with secure SSL using the GSK API: 1. Request/Response (RR): The client and server send a specified amount of data back and forth over a connection that remains active.
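To make the Request/Response pattern concrete, the following self-contained Java sketch performs fixed-size round trips over a single connection that remains active. It is not the benchmark program itself; the port, payload size, and transaction count are arbitrary assumptions.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class RequestResponseDemo {
        public static void main(String[] args) throws Exception {
            final ServerSocket server = new ServerSocket(0); // any free port

            // Echo server: reads a fixed-size request, writes it back.
            Thread echo = new Thread() {
                public void run() {
                    try {
                        Socket s = server.accept();
                        DataInputStream in = new DataInputStream(s.getInputStream());
                        DataOutputStream out = new DataOutputStream(s.getOutputStream());
                        byte[] buf = new byte[128];
                        for (int i = 0; i < 1000; i++) {
                            in.readFully(buf);  // request
                            out.write(buf);     // response
                            out.flush();
                        }
                        s.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            };
            echo.start();

            // Client: one 128-byte send and one 128-byte receive per
            // transaction, all on the same connection.
            Socket s = new Socket("localhost", server.getLocalPort());
            DataOutputStream out = new DataOutputStream(s.getOutputStream());
            DataInputStream in = new DataInputStream(s.getInputStream());
            byte[] payload = new byte[128];
            long start = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++) {
                out.write(payload);
                out.flush();
                in.readFully(payload);
            }
            System.out.println("1000 round trips in "
                    + (System.currentTimeMillis() - start) + " ms");
            s.close();
            server.close();
        }
    }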
FTP Performance in MB per second (Virtual Ethernet)
Sessions   | 1 Disk Unit ASP on 2757 IOA | 15 Disk Units ASP on 2757 IOA
1 Session  | 10.8                        | 42.0
2 Sessions | 10.5                        | 70.0
3 Sessions | 10.4                        | 75.0
5.4 TCP/IP non-secure performance
In table 5.4 you will find the payload information for the different Ethernet types.
Notes:
• Capacity metrics are provided for nonsecure transactions
• The table data reflects System i as a server (not a client)
• The data reflects Sockets and TCP/IP
• This is only a rough indicator for capacity planning. Actual results may differ significantly.
Notes:
• Capacity metrics are provided for nonsecure and each variation of security policy
• The table data reflects System i as a server (not a client)
• This is only a rough indicator for capacity planning. Actual results may differ significantly.
• Each SSL connection was established with a 1024 bit RSA handshake.
Notes:
• Capacity metrics are provided for nonsecure and each variation of security policy
• The table data reflects System i as a server (not a client)
• This is only a rough indicator for capacity planning. Actual results may differ significantly.
• Each SSL connection was established with a 1024 bit RSA handshake.
Notes:
• Capacity metrics are provided for nonsecure and each variation of security policy
• The table data reflects System i as a server (not a client)
• VPN measurements used transport mode, TDES, AES128 or RC4 with 128-bit key symmetric cipher and MD5 message digest with RSA public/private keys.
• For additional information regarding your Host Ethernet Adapter please see your specification manual and the Performance Management page for future white papers regarding iSeries and HEA.
• 1 Gigabit Jumbo frame Ethernet enables 12% greater throughput compared to normal frame 1 Gigabit Ethernet.
only a few seconds may perform best. Setting this value too low may result in extra error handling impacting system capacity.
• No single station can or is expected to use the full bandwidth of the LAN media. It offers up to the media's rated speed of aggregate capacity for the attached stations to share.
there is network congestion or overruns to certain target system adapters, then increasing the value from the default=*NONE to 2 or something larger may improve performance.
• FTS is a less efficient way to transfer data. However, it offers built in data compression for line speeds less than a given threshold. In some configurations, it will compress data when using LAN; this significantly slows down LAN transfers.
5.9 Additional Information Extensive information can be found at the System i Information Center web site at: http://www.ibm.com/eserver/iseries/infocenter .
Chapter 6. Web Server and WebSphere Performance This section discusses System i performance information in Web serving and WebSphere environments. Specific products that are discussed include: HTTP Server (powered by Apache) (in section 6.1), PHP - Zend Core for i (6.
Information source and disclaimer: The information in the sections that follow is based on performance measurements and analysis done in the internal IBM performance lab. The raw data is not provided here, but the highlights, general conclusions, and recommendations are included.
• CGI: HTTP invokes a CGI program which builds a simple HTML page and serves it via the HTTP server. This CGI program can run in either a new or a named activation group. The CGI programs were compiled using a "named" activation group unless specified otherwise.
Notes/Disclaimers:
• Data assumes no access logging, no name server interactions, KeepAlive on, LiveLocalCache off
• Secure: 128-bit RC4 symmetric cipher and MD5 message digest with 1024-bit RSA pub.
Notes/Disclaimers:
• These results are relative to each other and do not scale with other environments.
• IBM System i CPU features without an L2 cache will have lower web server capacities than the CPW value would indicate.
a. V5R4 provides similar Web server performance compared with V5R3 for most transactions (with similar hardware). In V5R4 there are opportunities to exploit improved CGI performance. More information can be found in the FAQ section of the HTTP server website http://www.
variable overhead of encryption/decryption, which is proportional to the number of bytes in the transaction. Note the capacity factors in the tables above comparing non-secure and secure serving. From Table 6.1, note that for simple transactions (e.g., static page serving), the impact of secure serving is around 20%.
11. HTTP and TCP/IP Configuration Tips: Information to assist with the configuration for TCP/IP and HTTP can be viewed at http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.
13. File System Considerations: Web serving performance varies significantly based on which file system is used. Each file system has different overheads and performance characteristics. Note that serving from the ROOT or QOPENSYS directories provides the best system capacity.
6.2 PHP - Zend Core for i This section discusses the different performance aspects of running PHP transaction based applications using Zend Core for i, including DB access considerations, utilization of RPG program call, and the benefits of using Zend Platform.
• Throughput - Orders Per Minute (OPM). Each order actually consists of 10 web requests to complete the order.
• Order response time (RT) in milliseconds
• Total CPU - Total system processor utilization
• CPU Zend/AP - CPU for the Zend Core / Apache component.
Conclusions: 1. The performance of each DB connection interface provides exceptional response time at very high throughput. Each order processed consisted of ten web requests. As a result, the capacity ranges from about 650 transactions per second up to about 870 transactions per second.
Conclusions: 1. As stated earlier, persistent connections can dramatically improve overall performance. When using persistent connections for all transactions, the DB CPU utilization is significantly less than when using non-persistent connections. 2.
Conclusions: 1. In both cases above, the overall system capacity improved significantly when using Zend Platform, by about 15-35% for this workload. With each order consisting of 10 web requests, processing 6795 orders per minute translates into about 1132 transactions per second.
6.3 WebSphere Application Server This section discusses System i performance information for the WebSphere Application Server, including WebSphere Application Server V6.1, WebSphere Application Server V6.0, WebSphere Application Server V5.0 and V5.1, and WebSphere Application Server Express V5.
because the improvements largely resulted from significant reductions in pathlength and CPU, environments that are constrained by other resources such as IO or memory may not show the same level of improvements seen here. Tuning changes in V6R1 As indicated above, most improvements will require no changes to an application.
For WebSphere 5.1 and earlier refer to the Performance Considerations guide at: www.ibm.com/servers/eserver/iseries/software/websphere/wsappserver/product/PerformanceConsiderations.html For WebSphere 5.1, 6.0 and 6.1 please refer to the following page and follow the appropriate link: www.
Trade 6 Benchmark ( IBM Trade Performance Benchmark Sample for WebSphere Application Server ) Description: Trade 6 is the fourth generation of the WebSphere end-to-end benchmark and performance sample application.
The Trade 6 application allows a user, typically using a Web browser, to perform the following actions:
• Register to create a user profile, user ID/password and initial account balance
• Login to validate
WebSphere Application Server V6.1
Historically, new releases of WebSphere Application Server have offered improved performance and functionality over prior releases of WebSphere. WebSphere Application Server V6.1 is no exception. Furthermore, the availability of WebSphere Application Server V6.
Trade3 Measurement Results:
Figure 6.2 Trade Capacity Results
• Trade3 chart: WebSphere 5.0 was measured on both V5R2 and V5R3 on a 4-way (LPAR) 825/2473 system. WebSphere 5.1 was measured on V5R3 on a 4-way (LPAR) 825/2473 system. WebSphere 6.0 was measured on V5R3 on a 4-way (LPAR) 825/2473 system. WebSphere 6.
Trade Scalability Results:
Figure 6.3 Trade Scaling Results
• Trade 3 chart: V5R2 - 890/2488 32-way 1.3 GHz; V5R2 was measured with WebSphere 5.0 and WebSphere 5.1. V5R3 - 890/2488 32-way 1.3 GHz; V5R3 was measured with WebSphere 5.1.
POWER5 chart: POWER4 - V5R3 825/2473 2-way (LPAR) 1.
PingServlet2TwoPhase drives a Session EJB which invokes an Entity EJB with findByPrimaryKey (DB Access) followed by posting a message to an MDB through a JMS Queue (Message access). These operations are wrapped in a global two-phase transaction and committed.
Figure 6.4 WebSphere Trade 3 primitive results. Note: The measurements were performed on the same machine, a 270-2434 600 MHz 2-way. All results are for a non-secure environment.
Accelerator for System i Coinciding with the release of i5/OS V5R4, IBM introduces new entry IBM System i models. The models introduce accelerator technologies and/or L3 cache in order to improve options for clients in the low-end server space.
Figure 6.6 provides insight into response time information regarding low-end System i models. There are two key concepts that are displayed in the data in Figure 6.6. The first is that Accelerator for System i models can provide substantially better response times than previous models for a single or many users.
Performance Considerations When Using WebSphere Transaction Processing (XA) In a general sense, a transaction is the execution of a set of related operations that must be completed together. This set of operations is referred to as a unit-of-work. A transaction is said to commit when it completes successfully.
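As a simple illustration of a unit-of-work, the sketch below groups two updates into one local, single-resource transaction (a one-phase commit rather than the two-phase XA case discussed here). The connection URL and the ACCOUNTS table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UnitOfWork {
        public static void main(String[] args) throws Exception {
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:as400://mysystem", "user", "password");
            conn.setAutoCommit(false); // group operations into one unit-of-work
            try {
                PreparedStatement debit = conn.prepareStatement(
                        "UPDATE ACCOUNTS SET BALANCE = BALANCE - ? WHERE ID = ?");
                debit.setInt(1, 100);
                debit.setInt(2, 1);
                debit.executeUpdate();

                PreparedStatement credit = conn.prepareStatement(
                        "UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE ID = ?");
                credit.setInt(1, 100);
                credit.setInt(2, 2);
                credit.executeUpdate();

                conn.commit();   // commits only if all related operations succeed
            } catch (Exception e) {
                conn.rollback(); // otherwise the whole unit-of-work is undone
                throw e;
            } finally {
                conn.close();
            }
        }
    }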
Restriction: You cannot benefit from the one-phase commit optimization in the following circumstances:
• If your application uses a reliability attribute other than assured persistent for its JMS messages.
• If your application uses Bean Managed Persistence (BMP) entity beans, or JDBC clients.
6.4 IBM WebFacing The IBM WebFacing tool converts your 5250 application DDS display files, menu source, and help files into Java Servlets, JSPs, JavaBeans, and JavaScript to allow your application to run in either WebSphere Application Server V5 or V4.
details on the number of I/O fields for each of these workloads. We ran the workloads on three separate machines (see table 6.5) to validate the performance characteristics with regard to CPW. When running the workloads, we tolerated only a 1.5 second server response time per panel.
• (Advanced Edition Only) Struts-compliant code generated by the WebFacing Tool conversion process which sets the foundation for extending your Webfaced applications using struts-compliant action architecture • Automatic configuration for UTF-8 support when you deploy to WebSphere Application Server version 5.
When set to an appropriate level for the Webfaced application, the Record Definition Cache can provide a decrease in memory usage, and slightly decreased processor usage. The number of record definitions that the cache will retain is set by an initialization parameter in the Webfaced application’s deployment descriptor (web.
To enable the servlet that will display the contents of the cache, first add the following segments to the Webfaced application's web.xml.
<servlet>
    <servlet-name>CacheDumper</servlet-name>
    <display-name>CacheDumper</display-name>
    <servlet-class>com.
Save a list of all the cached record data definitions. This list is saved in the RecordJSPs directory of the Webfaced application. The actual record definitions are not saved, just the list of what record definitions are cached. Once the cache is optimally tuned, this list can be used to preload the Record Definition cache.
Refer to the following table for the functionality provided by the Record Definition Loader servlet. This option will load the record definitions listed in a file in the RecordJSPs directory. Typically this file is created with the CacheDumper servlet previously described.
WebSphere Application Server. On System i servers, the recommended WebSphere application configuration is to run Apache as the web server and WebSphere Application Server as the application server. Therefore, it is recommended that you configure HTTP compression support in Apache.
You also need to add the directive:
SetOutputFilter DEFLATE
to the container to be compressed, or globally if the compression can always be done. There is documentation on the Apache web site on mod_deflate (http://httpd.apache.org/docs-2.0/mod/mod_deflate.
PartnerWorld for Developers Webfacing website: http://www.ibm.com/servers/enable/site/ebiz/webfacing/index.html IBM WebFacing Tool Performance Update - This white paper explains how to help optimize WebFaced Applications on IBM System i servers. Requests for the paper require user registration; there are no charges.
6.5 WebSphere Host Access Transformation Services (HATS) WebSphere Host Access Transformation Services (HATS) gives you all the tools you need to quickly and easily extend your legacy applications to business partners, customers, and employees.
customization requires development effort, while Default Rendering requires minimal development resources. Default: The screens in the application’s main path are unchanged. Moderate: An average of 30% of the screens have been customized. Advanced : All screens have been customized.
IBM Systems Workload Estimator for HATS The purpose of the IBM Systems Workload Estimator (WLE) is to provide a comprehensive System i sizing tool for new and existing customers interested in deploying new emerging workloads standalone or in combination with their current workloads.
requirements do not take into account the requirement for other web applications, such as customer applications. You should use IBM Systems Workload Estimator ( http://www-912.ibm.com/wle/EstimatorServlet ) to determine the system requirements for additional web applications.
6.7 WebSphere Portal The IBM WebSphere Portal suite of products enables companies to build a portal web site serving the individual needs of their employees, business partners and customers. Users can sign on to the portal and view personalized web pages that provide access to the information, people and applications they need.
6.9 WebSphere Commerce Payments Use the IBM Systems Workload Estimator to predict the capacities and resource requirements for WebSphere Commerce Payments. The Estimator allows you to predict a standalone WCP environment or a WCP environment associated with the buy visits from a WebSphere Commerce estimation.
of access mechanisms. Please see the Connect for iSeries white paper located at the following URL for more information on Connect for iSeries. http://www-1.
1. Connector relative capacity: The different back-end connector types are meant to allow users a simple way to connect the Connect for iSeries product to their back-end application.
Chapter 7. Java Performance
Highlights:
• Introduction
• What’s new in V6R1
• IBM Technology for Java (32-bit and 64-bit)
• Classic VM (64-bit)
• Determining Which JVM to Use
• Capacity Planning
• Tips and Techniques
• Resources
7.
option for Java applications which require large amounts of memory. The Classic VM remains available in V6R1, but future i5/OS releases are expected to support only IBM Technology for Java. The default VM in V6R1 is IBM Technology for Java 5.0, 32-bit.
On i5/OS, IBM Technology for Java runs in i5/OS Portable Application Solutions Environment (i5/OS PASE) with either a 32-bit (for the 32-bit VM) or 64-bit (for the 64-bit VM) environment.
Fortunately, it is not too difficult to come up with parameter values which will provide good performance. If you are moving an application from the Classic VM to IBM Technology for Java , you can use a tool like DMPJVM or verbose GC to determine how large the heap grows when running your application.
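Alongside DMPJVM or verbose GC output, a rough in-application view of heap growth can be obtained with the standard java.lang.Runtime API. This is only a supplementary sketch, not a substitute for those tools:

    public class HeapWatch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long totalMb = rt.totalMemory() / (1024 * 1024);
            long maxMb = rt.maxMemory() / (1024 * 1024);
            // Sample this periodically under load to see how large the heap
            // grows; the observed peak is a starting point for -Xms/-Xmx.
            System.out.println("Heap used: " + usedMb + " MB of " + totalMb
                    + " MB (max " + maxMb + " MB)");
        }
    }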
performance, it pays to apply analysis and optimizations to the Java bytecodes, and the resulting machine code. One approach to optimizing Java bytecode involves analyzing the object code “ahead of time” – before it is actually running.
applications with a large number of classes. Running CRTJVAPGM with OPTIMIZE(*INTERPRET) will create this program ahead of time, making the first startup faster. Garbage Collection Java uses Garbage Collection (GC) to automatically manage memory by cleaning up objects and memory when they are no longer in use.
display; rates of 20 to 30 faults per second are usually acceptable, but larger values may indicate a performance problem. In this case, the size of the memory pool should be increased, or the collection threshold value (GCHINL or -Xms) should be decreased so the heap isn’t allowed to grow as large.
later releases the cache is enabled and the maxpgms set to 20000 by default, so no adjustment is usually necessary. The verification cache operates by caching JVAPGMs that have been dynamically created for dynamically loaded classes.
libraries and environments may require a particular version. The Classic VM continues to support JDK 1.3, 1.4, 1.5 (5.0), and 1.6 (6.0) in V5R4, and JDK 1.4, 1.5 (5.0), and 1.6 (6.0) in V6R1. 3. The Classic VM supported an i5/OS-specific feature called Adopted Authority.
application itself or a reasonably complete subset of the application, using a load generating tool to simulate a load representative of your planned deployment environment.
• Beware of misleading benchmarks. Many benchmarks are available to test Java performance, but most of these are not good predictors of server-side Java performance. Some of these benchmarks are single-threaded, or run for a very short period of time.
4. Database Specific. Use of database can invoke significant path length in i5/OS. Invoking it efficiently can maximize the performance and value of a Java application.
does take advantage of programs created at optimization *INTERPRET. These programs require significantly less space and do not need to be deleted. Program objects (even at *INTERPRET) are not used by IBM Technology for Java.
• Consider the special property os400.
• The I/O method readLine() (e.g. in java.io.BufferedReader) will create a new String.
• String concatenation (e.g.: “The value is: “ + value) will generally result in creation of a StringBuffer, a String, and a character array, as the sketch below illustrates.
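A minimal sketch of the difference (the variable names are illustrative): repeated concatenation allocates new intermediate objects on every iteration, while a reused StringBuilder creates a single String at the end.

    import java.util.Arrays;
    import java.util.List;

    public class ConcatDemo {
        public static void main(String[] args) {
            List<String> lines = Arrays.asList("alpha", "beta", "gamma");

            // Naive concatenation: each += builds a new StringBuffer/
            // StringBuilder, String, and character array behind the scenes.
            String slow = "";
            for (String line : lines) {
                slow += line + "\n";
            }

            // Reusing one StringBuilder creates a single String at the end.
            StringBuilder sb = new StringBuilder();
            for (String line : lines) {
                sb.append(line).append('\n');
            }
            String fast = sb.toString();

            System.out.println(slow.equals(fast)); // true
        }
    }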
public void badPrintArray(int arr[]) {
    int i = 0;
    try {
        while (true) {
            System.out.println(arr[i++]);
        }
    } catch (ArrayIndexOutOfBoundsException e) {
        // Reached the end of the array....exit
    }
}

Instead, the above procedure should be written as:

public void goodPrintArray(int arr[]) {
    int len = arr.length;
    for (int i = 0; i < len; i++) {
        System.out.println(arr[i]);
    }
}
applications. The Toolbox driver supports remote access, and should be used when accessing the database on a separate system. This recommendation is true for both the 64-bit Classic VM and the new 32-bit VM.
• Pool Database Connections. Connection pooling is a technique for sharing a small number of database connections among a number of threads; a sketch of the pattern follows.
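The sketch below shows the shape of the pattern without tying it to a particular pool implementation. The JNDI name is hypothetical; in a WebSphere or Toolbox environment the pooled DataSource is configured in the server or driver setup rather than in application code.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PooledQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical JNDI name for a pooled DataSource.
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("jdbc/OrderEntryDB");

            Connection conn = ds.getConnection(); // borrowed from the pool
            try {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT NAME FROM CUSTOMERS WHERE ID = ?");
                ps.setInt(1, 42);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                rs.close();
                ps.close();
            } finally {
                conn.close(); // returns the connection to the pool
            }
        }
    }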
Resources The i5/OS Java and WebSphere performance team maintains a list of performance-related documents at http://www.ibm.com/systems/i/solutions/perfmgmt/webjtune.html . The Java Diagnostics Guide provides detailed information on performance tuning and analysis when using IBM Technology for Java.
Chapter 8. Cryptography Performance With an increasing demand for security in today’s information society, cryptography enables us to encrypt the communication and storage of secret or confidential data. This also requires data integrity, authentication and transaction non-repudiation.
CSP API Sets
User applications can utilize cryptographic services indirectly via i5/OS functions (SSL/TLS, VPN IPSec) or directly via the following APIs:
• The Common Cryptographic Architecture (CCA) API set is provided for running cryptographic operations on a Cryptographic Coprocessor.
8.3 Software Cryptographic API Performance This section provides performance information for System i systems using the following cryptographic services: i5/OS Cryptographic Services API and IBM JCE 1.
Table 8. Signing Performance
Encryption Algorithm | Threads | RSA Key Length (Bits) | i5/OS (Transactions/Second) | JCE (Transactions/Second)
SHA-1 / RSA | 1 | 1024 | 901 | 197
SHA-1 / RSA | 10 | 1024 | 1,155 | 240
SHA-1 / RSA | 1 | 2048 | 129 | 30
SHA-1 / RSA | 10 | 2048 | 163 | 35
Notes:
• Transaction Length set at 1024 bytes
• See section 8.2 for Test Environment Information
which is designed to meet FIPS 140-2 Level 4 security requirements. This new cryptographic card offers the security and performance required to support e-Business and emerging digital signature applications.
Table 8. Signing Performance - CCA CSP
Encryption Algorithm | Threads | RSA Key Length (Bits) | 4764 (Transactions/second)
SHA-1 / RSA | 1 | 1024 | 794
SHA-1 / RSA | 10 | 1024 | 1,074
SHA-1 / RSA | 1 | 2048 | 308
SHA-1 / RSA | 10 | 2048 | 465
Notes:
• Transaction Length set at 1024 bytes
• See section 8.2 for Test Environment information
Table 8. Supported number of 4764 Cryptographic Coprocessors
Server models | Maximum per server | Maximum per partition
IBM System i5 520, 550, 570 2/4W | 8 | 8
IBM System i5 570 8/12/16W, 595 | 32 | 8
Chapter 9. iSeries NetServer File Serving Performance
This chapter will focus on iSeries NetServer file serving performance.
9.1 iSeries NetServer File Serving Performance
iSeries Support for Windows.
Measurement Results:

Conclusion/Explanations:

environment can be obtained by sending an email to llhirsch@us.
From the charts above in the Measurement Results section, it is evident that when customers upgrade to V5R4 they can expect to see an improvement in throughput and response time when using iSeries NetServer.
Chapter 10. DB2 for i5/OS JDBC and ODBC Performance
DB2 for i5/OS can be accessed through many different interfaces. Among these interfaces are: Windows .NET, OLE DB, Windows database APIs, ODBC and JDBC. This chapter will focus on access through JDBC and ODBC by providing programming and tuning hints as well as links to detailed information.
• Use the lowest isolation level required by the application. Higher isolation levels can reduce performance levels as more locking and synchronization are required.
• Employ efficient SQL programming techniques to minimize the amount of data processed
• Reuse prepared statements to minimize parsing and optimization overhead for frequently run queries (see the sketch below)
• Use stored procedures to reduce the number of calls to the server
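A short sketch of two of these tips (the system name and PARTS table are hypothetical): the isolation level is lowered explicitly, and one prepared statement is reused across many executions so the SQL is parsed and optimized only once.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcTips {
        public static void main(String[] args) throws Exception {
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:as400://mysystem", "user", "password");

            // Use the lowest isolation level the application can tolerate.
            conn.setTransactionIsolation(
                    Connection.TRANSACTION_READ_UNCOMMITTED);

            // Prepare once, execute many times.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT PRICE FROM PARTS WHERE PARTNO = ?");
            for (int partno = 1; partno <= 100; partno++) {
                ps.setInt(1, partno);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(partno + ": " + rs.getDouble(1));
                }
                rs.close();
            }
            ps.close();
            conn.close();
        }
    }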
Packages may be shared by several clients to reduce the number of packages on the System i server. To enable sharing, the default libraries of the clients must be the same and the clients must be running the same application.
‘All libraries on the system’ will cause all libraries on the system to be used for catalog requests and may cause significant degradation in response times due to the potential volume of libraries to process.
Chapter 11. Domino on i This chapter includes performance information for Lotus Domino on the IBM i operating system. Some of the information previously included in this section has been removed. Earlier versions of the document can be accessed at http://www.
• IBM Lotus Domino V8 server with the IBM Lotus Notes V8 client: Performance, October 2007
http://www.ibm.com/developerworks/lotus/library/domino8-performance/index.html
• Lotus Domino 7 Server Performance, Part 2, November 2005
http://www.ibm.com/developerworks/lotus/library/domino7-internet-performance/index.
Delete documents marked for deletion
Create 1 appointment (every 90 minutes)
Schedule 1 meeting invitation (every 90 minutes)
Close the view
• Domino Web Access (formerly known as iNotes)
optimal performance but of course without the function provided in the Domino 7 templates. The following links refer to these articles:
• Lotus Domino 7 Server Performance, Part 1, September 2005
http://www.ibm.com/developerworks/lotus/library/nd7-perform/index.
Release | Users | CPU Utilization | Response Time | Faulting
Domino 5.0.11 | 3,800 | 19.4% | 119 ms | <1%
Domino 6 | 3,800 | 11.0% | 65 ms | <1%
Domino 5.0.11 | 20,000 | 96.2% | >5 sec | <1%
Domino 6 | 20,000 | 51.5% | 72 ms | <1%
The 2000 user comparison was done on a model i825-2473 with 6 1.1GHz POWER4 processors, 45GB of memory, and 60 18GB disk drives configured with RAID5, in a single Domino partition. The 3800 user comparison used a single Domino partition on a model i890-0198 with 32 1.
shopping application, but would provide even better response times than the 270-2423 as projected in Figure 11.3. When using MHz alone to compare performance capabilities between models, it is necessary for those models to have the same processor technology and configuration.
The eServer i5 Domino Edition builds on the tradition of the DSD (Dedicated Server for Domino) and the iSeries for Domino offering - providing great price/performance for Lotus software on System i5 and i5/OS. Please visit the following sites for the latest information on Domino Edition solutions:
• http://www.
that the larger the buffer pool size, the higher the fault rate, but the lower the CPU cost. If the faulting rate looks high, decrease the buffer pool size. If the faulting rate is low but your CPU utilization is high, try increasing the buffer pool size.
7. Full text indexes. Consider whether to allow users to create full text indexes for their mail files, and avoid using them whenever possible. These indexes are expensive to maintain since they take up CPU processing time and disk space. 8. Replication.
11.8 Domino Web Access The following recommendations help optimize your Domino Web Access environment: 1. Refer to the redbooks listed at the beginning of this chapter. The redbook, “iNotes Web Access on the IBM eServer iSeries server,” contains performance information on Domino Web Access including the impact of running with SSL.
11.10 Performance Monitoring Statistics
A function to monitor performance statistics was added in Domino Release 5.0.3. Domino will track performance metrics of the operating system and output the results to the server. Type "show stat platform" at the server console to display them.
2. *MINIMIZE The main storage will be allocated to minimize the space used by the object. That is, as little main storage as possible will be allocated and used. This minimizes main storage usage while increasing the number of disk I/O operations since less information is cached in main storage.
The following is an example of how to issue the command:
CHGATR OBJ(name-of-object) ATR(*MAINSTGOPT) VALUE(*NORMAL, *MINIMIZE, or *DYNAMIC)
The chart below depicts V5R3-based paging curve measurements performed with the following settings for the mail databases: *NORMAL, *MINIMIZE, and *DYNAMIC.
During the tests, the *DYNAMIC and *MINIMIZE settings used up to 5% more CPU resource than *NORMAL. Figure 11.5 below shows the response time data rather than fault rates for the same test shown in Figure 11.4 for the attributes *NORMAL, *DYNAMIC, and *MINIMIZE.
NOTE: MCU ratings should NOT be used directly as a sizing guideline for the number of supported users. MCU ratings provide a relative comparison metric which enables System i models to be compared with each other based on their Domino processing capability.
users or relatively low transaction rates, response times may be significantly higher for a small LPAR (such as 0.2 processor) or partial processor model as compared to a full processor allocation of the same technology. The IBM Systems Workload Estimator will not recommend the 500 CPW or 600 CPW models for Domino processing.
Chapter 12. WebSphere MQ for iSeries 12.1 Introduction The WebSphere MQ for iSeries product allows application programs to communicate with each other using messages and message queuing. The applications can reside either on the same machine or on different machines or platforms that are separated by one or more networks.
enhancement should allow customers to run with smaller, more manageable, receivers with less concern about the checkpoint taken following a receiver roll-over during business hours.
applications using MQ Series are running, you may need to consider adding memory to these pools to help performance.
• Nonpersistent messages use significantly less CPU and IO resource than persistent messages do, because persistent messages use native journaling support on the iSeries to ensure that messages are recoverable.
Chapter 13. Linux on iSeries Performance 13.1 Summary Linux on iSeries expands the iSeries platform solutions portfolio by allowing customers and software vendors to port existing Linux applications to the iSeries with minimal effort.
• Shared Processors. This variation of LPAR allows the Hypervisor to use a given processor in multiple partitions. Thus, a uni-processor might be divided in various fractions between (say) three LPAR partitions. A four-way SMP might give 3.9 CPUs to one partition and 0.1 CPUs to another.
iSeries Linux is a program-execution environment on the iSeries system that provides a traditional memory model (not single-level store) and allows direct access to machine instructions (without the mapping of MI architecture).
13.4 Basic Configuration and Performance Questions Since, by definition, iSeries Linux means at least two independent partitions, questions of configuration and performance get surprisingly complicated, at least in the sense that not everything is on one operating system, and overall performance is not visible to a single set of tools.
13.5 General Performance Information and Results A limited number of performance related tests have been conducted to date, comparing the performance of iSeries Linux to other environments on iSeries and to compare performance to similarly configured (especially CPU MHz) pSeries running the application in an AIX environment.
[Chart: Computational Environment - relative integer and floating point performance of Linux, ILE, and PASE as a fraction of ILE performance (bigger is better).]
One virtue of the i870, i890, and i825 machines is that the hardware floating point unit can make up for some of the code generation deficit due to its superior hardware scheduling capabilities.
Here, a model 840 was subdivided into the partition sizes shown and a typical web serving load was used. A "hit" is one web page or one image. The kttpd is a kernel-based daemon available on Linux which serves only static web pages or images.
As noted above, many distributions are based on the 2.95 gcc compiler. The more recent 3.2 gcc is also used by some distributions. Results there show some variability and not much net improvement. To the extent it improves, the gap with ILE should close somewhat.
• Cost. Because the disk is virtual, it can be created to any size desired. For some kinds of Linux partitions, a single modern physical disk is overkill, providing far more capacity than required. These requirements only increase if RAID, in particular, is specified.
typically recommended because it allows the Linux partitions to leverage the storage subsystem the customer has in the OS/400 hosting partition. 2. As the application gains in complexity, it is probably less likely that the application should switch from one product to the other.
do so, you may wish to compare with the next previous version. This would be especially important if you have one key piece of open source code largely responsible for the performance of a given partition. There is no way of ensuring that a new distribution is actually faster than the predecessor except to test it out.
substantial amount of Virtual I/O. This is probably on the high side, but it can be important to have something left over. If the hosting partition uses all its CPU, Virtual I/O may slow substantially.
• Use Virtual LAN for connections between iSeries partitions, whether OS/400 or Linux.
Native and Virtual LAN (e.g. from outside the box on Native LAN, through the partition with the Native LAN, and then moving to a second partition via Virtual LAN then to another).
Chapter 14. DASD Performance This chapter discusses DASD subsystems available for the System i platform. There are two separate considerations. Before IBM i operating system V6R1, one only had to consider particular devices, IOAs, IOPs, and SAN devices.
14.1.0 Direct Attach (Native)
14.1.1 Hardware Characteristics
14.1.1.1 Devices & Controllers
[Table: hardware characteristics of 15K RPM disk units 4329 (280 GB), 433B (70 GB), 433C (140 GB) and 433D (280 GB), including seek times and supported interface speeds.]
14.1.2 iV5R2 Direct Attach DASD This section discusses the direct attach DASD subsystem performance improvements that were new with the iV5R2 release. These consist of the following new hardware and software.
14.1.2.2
Save/restore rates by number of 35 GB DASD units (measurement numbers in GB/HR):
IOA | Operation | 15 Units | 30 Units | 45 Units
2757 | Save *SAVF | 82 | 165 | 250
2757 | Restore | 82 | 165 | 250
2778 | Save *SAVF | 41 | 83 | 122
2778 | Restore | 41 | 83 | 122
This restrictive test is intended to show the effect of the 2757 IOAs in a backup and recovery environment.
14.1.3 571B
iV5R4 offers two new options on DASD configuration:
• RAID6, which offers improved system protection on supported IOAs. (NOTE: RAID6 is supported under iV5R3, but we have chosen to look at performance data on an iV5R4 system.)
• IOPLess operation on supported IOAs.
14.1.4 571B, 5709, 573D, 5703, 2780 IOA Comparison Chart In the following two charts we are modeling a System i 520 with a 573D IOA using RAID5, comparing 3 70GB 15K RPM DASD to 4 70GB 15K RPM DASD. The 520 is capable of holding up to 8 DASD but many of our smaller customers do not need the storage.
The charts below are an attempt to allow the different IOAs available to be compared on a single chart. An I/O Intensive Workload was used for our throughput measurements. The system used was a 520 model with a single 5094 attached which contained the IOAs for the measurements.
14.1.5 Comparing Current 2780/574F with the new 571E/574F and 571F/575B NOTE: iV5R3 has support for the features in this section but all of our performance measurements were done on iV5R4 systems. For information on the supported features see the IBM Product Announcement Letters.
14.1.6 Comparing 571E/574F and 571F/575B IOP and IOPLess In comparing IOP and IOPLess runs we did not see any significant differences, including the system CPU used. The system we used was a model 570 4 way, on the IOP run the system CPU was 11.6% and on the IOPLess run the system CPU was 11.
14.1.7 Comparing 571E/574F and 571F/575B RAID5 and RAID6 and Mirroring System i protection information can be found at http://www.redbooks.ibm.com/ in the current System i Handbook or the Info Center http://publib.
In comparing Mirroring and RAID one of the concerns is capacity differences and the hardware needed. We tried to create an environment where the capacity was the same in both environments. To do this we built the same size database on “15 35GB DASD using RAID5” and “14 70GB DASD using Mirroring spread across 2 IOAs”.
14.1.8 Performance Limits on the 571F/575B In the following charts we try to characterize the 571F/575B in different DASD configurations. The 15 DASD experiment is used to give a comparison point with DASD experiments from charts 14.1.5.1 and 14.1.5.2.
14.1.9 Investigating 571E/574F and 571F/575B IOA, Bus and HSL limitations. With the new DASD controllers and IOPLess capabilities, IBM has created many new options for our customers. Customers who needed more storage in their smaller configurations can now grow.
14.1.9.1
14.1.9.2
[Charts: Large Block READs on a Single 5094 Tower.]
14.1.10 Direct Attach 571E/574F and 571F/575B Observations We did some simple comparison measurements to provide graphical examples for customers to observe characteristics of new hardware.
14.2 New in iV5R4M5
14.2.1 9406-MMA CEC vs 9406-570 CEC DASD
14.2.2 RAID Hot Spare For the following test, the IO workload was set up to run for 14 hours. About 5 hours after starting, a DASD unit was pulled from the configuration. This forced a RAID set rebuild.
14.2.3 12X Loop Testing A 9406-MMA 8 Way system with 96 GB of mainstore and 396 DASD in #5786 EXP24 Disk Drawer on 3 12X loops for the system ASP were used, ASP 2 was created on a 4th 12X loop by adding 5796 system expansion units with 571F IOAs attaching 36 4327 70 GB DASD in # 5786 EXP24 Disk Drawer with RAID5 turned on.
14.3 New in iV6R1M0
14.3.1 Encrypted ASP
More CPU and memory may be needed to achieve the same performance once encryption is enabled.
[Chart: Non-Encrypted ASP vs Encrypted ASP.]
14.3.2 57B8/57B7 IOA With the addition of the POWER6 520 and 550 systems comes the new 57B8/57B7 SAS RAID Enablement Controller with Auxiliary Write Cache. This controller is only available in the POWER6 520 and 550 systems and provides RAID5/6 capabilities, with 175MB redundant write cache.
The POWER6 520 and 550 also have an external SAS port, that is controlled by the 57B8/57B7, used to connect a single #5886 - EXP 12S SAS Disk Drawer which can contain up to 12 SAS DASD. Below is a chart showing the addition of the #5886 - EXP 12S SAS Disk Drawer.
14.3.3 572A IOA The 572A IOA is a SAS IOA that is mainly used for SAS tape attachment but the 5886 EXP 12S SAS Disk Drawer can also be attached. Performance will be poor as the IOA does not have any cache. The following charts help to show the performance characteristics that resulted during experiments in the Rochester lab.
14.4 SAN - Storage Area Network (External) There are many factors to consider when looking at external storage options. You can get more information through your IBM representative and the white papers that are available at the following location. https://www-304.
14.5 iV6R1M0 -- VIOS and IVM Considerations Beginning in iV6R1M0, IBM i operating system will participate in a new virtualization strategy by becoming a client of the VIOS product.
14.5.1 General VIOS Considerations
14.5.1.1 Generic Concepts
520 versus 512. Long-time IBM i operating system users know that IBM i operating system disks are traditionally configured with 520-byte sectors. The extra eight bytes beyond the 512 used for data are used for various purposes by Single Level Store.
14.5.1.2 Generic Configuration Concepts
There are several important principles to keep track of in terms of getting good performance. Most of the following are issues when the disks are configured. A great many problems can be eliminated (or created) when the drives are originally configured.
3. Prefer external disks attached directly to the IBM i operating system over those attached via VIOS. This is basically a question of the Fibre Channel adapter and which partition owns it.
8. Ensure that a reasonable number of virtual disks are created and made available to the IBM i operating system. It is tempting to simply lump all the storage one has in a virtual environment into a couple (or even one) large virtual disks. Avoid this if at all possible.
14.5.1.3 Specific VIOS Configuration Recommendations -- Traditional (non-blade) Machines
1. Avoid volume groups if possible. VIOS "hdisks" must have a physical volume identifier (PVID). Creating a volume group is an easy way to assign one, and some literature will lead you to do it that way.
3. Limited number of virtual devices per virtual SCSI adapter. You will have to configure some number of virtual SCSI adapters so that VIOS can provide a path for the IBM i operating system to talk to VIOS as if the virtual disks were really physical SCSI devices.
14.5.1.3.1 VIOS and JS12 Express and JS22 Express Considerations
Most of our work consisted of measurements with the JS22 offering and external disks using the DS4800 product. The following are results obtained in various measurements, followed by a few general comments about configuration.
The chart above shows some basic performance scaling for 1, 2, 3 and 4 processors. For this comparison both partition measurements were done with the processors set up as shared, and with the IBM i operating system partition set to capped.
The following charts are a view of the characteristics we observed during our Commercial Performance Workload testing on our JS22 Express. The first chart shows the effect on the Commercial Performance Workload when we apply 3 dedicated processors and then switch to 3 shared processors.
In the following single-partition Commercial Performance Workload runs, the average VIOS CPU utilization stayed under 40%, so VIOS resource appears to be available. In many customer environments, however, communications and other resources are also running, and these resources will also be routed through VIOS.
The following chart shows two IBM i operating system partitions using 14 GB of memory and 1.7 processors each, served by 1 VIOS partition using 2 GB of memory and 0.6 processors. The Commercial Performance Workload ran the same number of transactions on each of the partitions for the same time intervals.
14.5.1.3.2 BladeCenter S and JS12 Express
The IBM i operating system is now supported on a JS12 Express in a BladeCenter S. The system is limited to 12 SAS DASD, and the following charts try to characterize the performance we achieved during experiments with the Commercial Performance Workload in the IBM lab.
14.5.1.3.3 JS12 Express and JS22 Express Configuration Considerations
1. The aggregate total of virtual disks (LUNs) will be sixteen at most. Many customers will want to deploy between 12 and 16 LUNs and maximize symmetry. Consult carefully with your support team on the choices here.
14.5.1.3.4 DS3000/DS4000 Storage Subsystem Performance Tips
Physical disks can be configured various ways with RAID levels, number of disks in each array, and number of LUNs created over those arrays. There are also various reasons for the configurations that are chosen.
(Chart: BladeCenter H with a JS22 4-Way, Commercial Performance Workload.)
14.6 IBM i operating system 5.4 Virtual SCSI Performance
The primary goal of virtualization is to lower the total cost of ownership of equipment by improving utilization of the overall system resources and reducing the labor requirements to operate and manage many servers.
In the test results that follow, we see the CPU required by the IBM i operating system Virtual SCSI server; the benefits of the IBM i operating system Virtual SCSI implementation should be assessed for a given environment. Simultaneous multithreading should be enabled in a virtual hosted disk environment.
14.6.1 Introduction
In general, applications are functionally isolated from the exact nature of their storage subsystems by the operating system. An application does not have to be aware of whether its storage is contained on one type of disk or another when performing I/O.
All measurements were completed on a POWER5 570+ 4-Way (2.2 GHz). Each system is configured as an LPAR, and each virtual SCSI test was performed between two partitions on the same system with one CPU for each partition. IBM i operating system 5.4 was used on the virtual SCSI server and AIX 5.
14.6.2.1 Native vs. Virtual Performance
Figure 1 shows a comparison of measured bandwidth using virtual SCSI and local attached DASD for reads with varying block sizes of operations. The difference in the reads between virtual I/O and native I/O in these tests is attributable to the increased latency using virtual I/O.
14.6.2.3 Virtual SCSI Bandwidth-Network Storage Description (NWSD) Scaling
Figure 3 shows a comparison of measured bandwidth while scaling network storage descriptions with varying block sizes of operations. Each of the network storage descriptions has a single network storage space attached to it.
14.6.2.4 Virtual SCSI Bandwidth-Disk Scaling
Figure 4 shows a comparison of measured bandwidth while scaling disk drives with varying block sizes of operations. Each of the network storage descriptions has a single network storage space attached to it.
14.6.3 Sizing
Sizing methodology is based on the observation that the processor time required to perform an I/O on the IBM i operating system Virtual SCSI server is fairly constant for a given I/O size. The I/O devices supported by the Virtual SCSI server are sufficiently similar to provide good recommendations.
To calculate IBM i operating system Virtual SCSI CPU requirements, the following formula is provided. The number of transactions per second can be collected with the IBM i operating system command WRKDSKSTS. Based on the average transaction size in WRKDSKSTS, select a number from the table.
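As a rough illustration of this calculation, a minimal sketch follows; the per-I/O CPU costs in it are hypothetical placeholders, not the values from the sizing table, and should be replaced with the table entry matching your average transaction size.

```python
# Minimal sketch of the Virtual SCSI server CPU sizing described above.
# The per-I/O CPU costs are HYPOTHETICAL placeholders; take real values
# from the sizing table in this section for your average transaction size.
CPU_SECONDS_PER_IO = {
    4096: 0.000045,   # assumed cost for 4 KB transactions
    8192: 0.000060,   # assumed cost for 8 KB transactions
    32768: 0.000110,  # assumed cost for 32 KB transactions
}

def vscsi_cpus_required(ios_per_second: float, avg_io_size: int) -> float:
    """Estimate how many CPUs the Virtual SCSI server partition needs."""
    cost = CPU_SECONDS_PER_IO[avg_io_size]  # CPU-seconds per I/O of this size
    return ios_per_second * cost            # CPU-seconds per second = CPUs

# Example: WRKDSKSTS reports 3000 I/Os per second at an 8 KB average size.
print(f"{vscsi_cpus_required(3000, 8192):.2f} CPUs")
```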
14.6.3.2 Sizing when using Micro-Partitioning
Defining Virtual SCSI servers in micro-partitions enables much better granularity of processor resource sizing and potential recovery of unused processor time by uncapped partitions.
14.6.3.3 Sizing memory
The IBM i operating system Virtual SCSI server supports data read caching on the virtual hosted disk server partition, so all I/Os that it services could benefit from caching of heavily used data. Read performance can vary depending upon the amount of memory assigned to the server partition.
14.6.4 AIX Virtual IO Client Performance Guide
The following link provides more in-depth performance tuning information for the AIX virtual SCSI client. Advanced POWER Virtualization on IBM p5 Servers: Architecture and Performance Considerations http://www.
Chapter 15. Save/Restore Performance
This chapter’s focus is on the IBM i operating system platform. For legacy system models, older device attachment cards, and the lower performing backup devices, see the V5R3 performance capabilities reference. Many factors influence the observable performance of save and restore operations.
15.2 Save Command Parameters that Affect Performance
Use Optimum Block Size (USEOPTBLK)
The USEOPTBLK parameter is used to send a larger block of data to backup devices that can take advantage of the larger block size. Every block of data that is sent has a certain amount of overhead that goes with it.
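The effect can be pictured with a small back-of-the-envelope model; every number below is an illustrative assumption, not a measured value, but it shows why fewer, larger blocks reduce total overhead for the same amount of data.

```python
# Illustrative model only: per-block overhead and device rate are assumptions.
def transfer_time_seconds(total_mb: float, block_kb: float,
                          per_block_overhead_ms: float = 0.2,
                          device_mb_per_sec: float = 100.0) -> float:
    blocks = total_mb * 1024 / block_kb             # number of blocks sent
    overhead = blocks * per_block_overhead_ms / 1000.0
    return overhead + total_mb / device_mb_per_sec  # overhead + raw transfer

for block_kb in (32, 256):  # larger blocks amortize the fixed per-block cost
    print(f"{block_kb} KB blocks: {transfer_time_seconds(10000, block_kb):.0f} s")
```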
15.3 Workloads
The following workloads were designed to help evaluate the performance of single, concurrent and parallel save and restore operations for selected devices. Familiarization with these workloads can help in understanding differences in the save and restore rates.
15.4 Comparing Performance Data
When comparing the performance data in this document with the actual performance on your system, remember that the performance of save and restore operations is data dependent. If the same backup device was used on data from three different systems, three different rates may result.
15.5 Lower Performing Backup Devices
With the lower performing backup devices, the devices themselves become the gating factor, so the save rates are approximately the same regardless of system CPU size (DVD-RAM).
15.8 The Use of Multiple Backup Devices
Concurrent Saves and Restores - The ability to save or restore different objects from a single library/directory to multiple backup devices, or different libraries/directories to multiple backup devices, at the same time from different jobs.
15.9 Parallel and Concurrent Library Measurements
This section discusses parallel and concurrent library measurements for tape drives, while sections later in this chapter discuss measurements for virtual tape drives.
15.9.1 Hardware (2757 IOAs, 2844 IOPs, 15K RPM DASD)
Hardware Environment.
15.9.2 Large File Concurrent
For the concurrent testing, 16 libraries were built, each containing a single 320 GB file with 80 4-GB members. The file size was chosen to sustain a flow across the HSL, system bus, processors, memory and tape drives for about an hour.
15.9.3 Large File Parallel
For the measurements in this environment, BRMS was used to manage the save and restore, taking advantage of the ability built into BRMS to split an object between multiple tape drives. Starting with a 320 GB file in a single library and building it up to 2.
15.9.4 User Mix Concurrent
User Mix generally portrays a fair population of customer systems, where the real data is a mixture of programs, menus, and commands, along with their database files.
15.10 Number of Processors Affects Performance
With the Large Database File workload, it is possible to fully feed two backup devices with a single processor, but with the User Mix workload it takes more than one processor to fully feed a backup device. A recommendation might be 1 1/3 processors for each backup device you want to feed with User Mix data.
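A sketch of that rule of thumb, assuming the 1 1/3 processors-per-device guidance above applies to your data:

```python
# Rule-of-thumb helper from the guidance above: about 1 1/3 processors
# per backup device when feeding User Mix data.
def processors_for_user_mix(backup_devices: int) -> float:
    return backup_devices * (4 / 3)

print(processors_for_user_mix(3))  # feeding 3 devices -> 4.0 processors
```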
15.11 DASD and Backup Devices Sharing a Tower
The system architecture does not require that DASD and backup devices be kept separated. In testing in the IBM Rochester lab for the 3580 model 002, one backup device was attached to each tower, and all towers had 45 DASD units in them.
15.12 Virtual Tape
Virtual tape drives were introduced in iV5R4 so that customers can make use of the speed of saving to DASD, then save the data to tape drives using DUPTAP, reducing the backup window during which the system is unavailable to users.
The following measurements were done on a system with newer hardware, including a 3580 Ultrium 3 4Gb Fiber Channel tape drive, 571E storage adapters, and 4327 70GB (U320) DASD. Measurements were also done comparing a save of 1000 empty libraries to tape versus a save of these libraries to virtual tape followed by DUPTAP from the virtual tape to tape.
15.13 Parallel Virtual Tapes
NOTE: Virtual tape is reading and writing to the same DASD, so the maximum throughput with our concurrent and parallel measurements is different than our tape drive tests, where we were reading from DASD and writing to tape.
15.14 Concurrent Virtual Tapes
NOTE: Virtual tape is reading and writing to the same DASD, so the maximum throughput with our concurrent and parallel measurements is different than our tape drive tests, where we were reading from DASD and writing to tape.
15.15 Save and Restore Scaling using a Virtual Tape Drive
A 570 8-way System i was used for the following tests. A user ASP was created using up to 3 571F IOAs with up to 36 U320 70 GB DASD on each IOA. The chart shows the number of DASD in each test; the virtual tape drive was created using that DASD.
15.16 Save and Restore Scaling using 571E IOAs and U320 15K DASD units to a 3580 Ultrium 3 Tape Drive
A 570 8-way System i was used for the following tests. A user ASP was created with the number of DASD listed in each test. The workload data was then saved to the tape drive, deleted from the system, and restored to the user ASP.
(Chart: User Mix Saves.)
15.17 High-End Tape Placement on System i
The current high-end tape drives (ULTRIUM-2 / ULTRIUM-3 and 3592-J / 3592-E) need to be placed carefully on the System i buses and HSLs in order to avoid bottlenecking.
15.18 BRMS-Based Save/Restore Software Encryption and DASD-Based ASP Encryption
The Ultrium-3 was used in the following experiments, which attempt to characterize the effects of BRMS-based save/restore software encryption and DASD-based ASP encryption.
Performance will be limited to the native drive rates (shown in table 15.1.1) because encrypted data blocks have a very low compaction ratio.
15.19 5XX Tape Device Rates
Note: Measurements for the high speed devices were completed on a 570 4-way system with 2844 IOPs, 2780 IOAs, and 180 15K RPM RAID5 DASD units. The smaller tape device tests were completed on a 520 2-way with 75 DASD units.
SLR60 save (S) and restore (R) rates by release (from table 15.):
Workload                        iV5R4M0 S / R    iV5R4 S / R
Source File 1GB                 22 / 15          17 / 19
User Mix 12GB                   34 / 30          30 / 30
Large File 32GB                 39 / 37          32 / 32
1 Directory Many Objects        12 / 8           23 / 12
Many Directories Many Objects   15 / 7           25 / 9
Domino Mail Files               15 / 15          29 / 29
Network Storage Space           19 / 19          34 / 34
15.20 5XX Tape Device Rates with 571E & 571F Storage IOAs and 4327 (U320) Disk Units
Save/restore rates of 3580 Ultrium 3 (2Gb and 4Gb Fiber Channel) tape devices and of virtual tape devices were measured on a 570 8-way system with 571E and 571F storage adapters and 714 type 4327 70GB (U320) disk units.
15.21 5XX DVD RAM and Optical Library
Save (S) and restore (R) rates:
7.
Many Directories Many Objects: S 2.6 / 2.6 / 2.2 / 2.2 / 1.8 / 1.8; R 6.0 / 6.0 / 6.0 / 6.0 / 5.4 / 5.4
Domino Mail Files: S 2.6 / 2.6 / 2.0 / 2.0 / 1.8 / 1.8; R 9.8 / 9.8 / 9.8 / 9.8 / 9.6 / 9.6
Network Storage Space: S 2.6 / 2.6 / 2.0 / 2.0 / 1.8 / 1.8; R 9.8 / 9.8 / 9.8 / 9.8 / 9.6 / 9.6
15.22 Software Compression
The rates a customer will achieve will depend upon the system resources available. This test was run in a very favorable environment to try to achieve the maximum rates. Software compression rates were gathered using the QSRSAVO API.
15.23 9406-MMA DVD RAM
Save (S) and restore (R) rates:
28.0 12.5 R 8.
Large File 4GB: S 8.0 / 2.2; R 45.0 / 14.0
1 Directory Many Objects: S 2.3 / 2.3; R 9.0 / 9.0
Many Directories Many Objects: S 2.2 / 2.2; R 5.5 / 5.5
Domino Mail Files: S 2.3 / 2.3; R 14.5 / 14.5
Network Storage Space: S 2.2 / 2.2; R 14.0 / 14.0
15.24 9406-MMA 576B IOPLess IOA
Save (S) and restore (R) rates:
50 50 50 50 50.
Many Directories Many Objects: S 38 / 38 / 40 / 40 / 40 / 40 / 40; R 26 / 26 / 27 / 27 / 27 / 28 / 26
Domino Mail Files: S 700 / 330 / 650 / 550 / 580 / 575 / 450; R 700 / 700 / 750 / 650 / 650 / 650 / 450
15.25 What’s New and Tips on Performance
What’s New:
iV6R1M0 (March 2008): BRMS-Based Save/Restore Software Encryption and DASD-Based ASP Encryption; 576B IOPLess Storage IOA
iV5R4M5 (July 2007): 3580 Ult.
Chapter 16 IPL Performance
Performance information for Initial Program Load (IPL) is included in this section. The primary focus of this section is to present observations from IPL tests on different System i models. The data for both normal and abnormal IPLs are broken down into phases, making it easier to see the detail.
16.3 9406-MMA System Hardware Information
16.3.1 Small system Hardware Configuration
9406-MMA 7051 4-way - 32 GB Mainstore
DASD / 30 70GB 15K rpm arms; 6 DASD in the CEC (mirrored), 24 DASD in a #5786 EXP24 .
16.4 9406-MMA IPL Performance Measurements (Normal)
The following tables provide a comparison summary of the measured performance data for a normal and abnormal IPL.
16.6 NOTES on MSD
MSD stands for Mainstore Dump. The general IPL phases as they relate to the SRCs posted on the operator panel: Processor MSD includes the D2xx xxxx and C2xx xxxx SRCs right after the system is forced to terminate.
16.7 5XX System Hardware Information
16.7.1 5XX Small system Hardware Configuration
520 7457 2-way - 16 GB Mainstore
DASD / 23 35GB 15K rpm arms, RAID protected
Software Configuration
100,000 spool fi.
16.8 5XX IPL Performance Measurements (Normal)
The following tables provide a comparison summary of the measured performance data for a normal and abnormal IPL.
16.10 5XX IOP vs IOPLess effects on IPL Performance (Normal)
Measurement units are in minutes and seconds. Both systems: 570 7476, 256 GB, 924 DASD, 16-way, iV5R4 GA7 Firmware.

Phase       IOP      IOPLess
Hardware    17:44    18:06
SLIC        6:43     7:20
OS/400      2:32     2:52
Total       26:59    28:18
Table 16.
Chapter 17. Integrated BladeCenter and System x Performance
This chapter provides a performance overview and recommendations for the Integrated xSeries Server, the Integrated xSeries Adapter and the iSCSI host bus adapter. In addition, the chapter presents some performance characteristics and impacts of these solutions on System i™.
Integrated xSeries Servers (IXS)
An Integrated xSeries Server is an Intel processor-based server on a PCI-based interface card that plugs into a host system. This card provides the processor, memory, USB interfaces, and in some cases, a built-in gigabit Ethernet adapter.
• Write Cache Property - When the disk device write cache property is disabled, disk operations have performance characteristics similar to shared disks. You may examine or change the “Write Cache” property on Windows by selecting the disk's “Properties” and then the “Hardware” tab.
• With iSCSI, there are some Windows-side disk configuration rules you must take into account to enable efficient disk operations. Windows disks should be configured with one disk partition per virtual drive, and the file system should be formatted with cluster sizes of 4 KB or a multiple of 4 KB.
2. Vary on any Network Server Description (NWSD) with a network server connection type of *ISCSI. During iSCSI network server vary-on processing, the QFPHIS subsystem is automatically started if necessary. The subsystem will activate the private memory pool.
IXS and IXA I/O operations (disk, tape, optical and virtual Ethernet) occur through the individual IXS and IXA IOP resource. This IOP imposes a finite capacity. The IOP processor utilization may be examined via the iSeries Collection Services utilities.
Memory requirements before vary on:
                      For Each Target HBA    For Each NWSD
Machine Pool:         21 MBytes              1 MByte
Base Pool:            1 MByte                0.5 MByte
QFPHIS Private Pool:  0.5 MByte              1 MByte
Total:                22.5 MBytes            2.5 MBytes
Warning: To ensure expected performance and continuing machine operation, it is critical to allocate sufficient memory to support all of the devices that are varied on.
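A small helper can make this budgeting concrete; it is a sketch that simply applies the per-target-HBA and per-NWSD figures from the table above.

```python
# Sketch: memory (MB) to reserve before varying on iSCSI resources,
# using the per-target-HBA and per-NWSD figures from the table above.
def iscsi_memory_mb(target_hbas: int, nwsds: int) -> dict:
    return {
        "machine_pool": 21.0 * target_hbas + 1.0 * nwsds,
        "base_pool": 1.0 * target_hbas + 0.5 * nwsds,
        "qfphis_private_pool": 0.5 * target_hbas + 1.0 * nwsds,
        "total": 22.5 * target_hbas + 2.5 * nwsds,
    }

print(iscsi_memory_mb(target_hbas=2, nwsds=4))  # total: 55.0 MB
```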
(Chart: CPW per 1K Disk Operations - write and read costs for operation sizes from 512 bytes to 64 KB.)
• A storage space which is linked as shared, or a disk with caching disabled, requires more CPU to process write operations (approximately 45% more).
• Sequential operations cost approximately 10% less than the random I/O results shown above.
The blue square line shows an iSCSI connection with a single target iSCSI HBA - single initiator iSCSI HBA connection, configured to run with standard frames. The pink circle line is a single target iSCSI HBA to multiple servers and initiators, also running with standard frames.
than an IXS or IXA attached VE connection. “Stream” means that the data is pushed in one direction, with only the TCP acknowledge packets running in the other direction.
17.6.2 VE CPW Cost
CPW cost below is listed as CPW per Mbit/sec. For the point to point connection, the results are different depending on the direction of transfer.
The chart above shows the CPW efficiency of operations (larger is better). Note the CPW per Mbits/sec scale on the left, as it is different for each chart. For an IXS or IXA, the port-based VE has the least CPW cost for smaller packets due to consolidation of transfers available in Licensed Internal Code.
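As a hedged illustration of applying a per-Mbit/sec figure read off such a chart (the cost value below is a made-up placeholder, not a measured result), the CPW consumed by a sustained stream is simply the per-Mbit/sec cost multiplied by the throughput:

```python
# Illustrative only: cpw_per_mbps is a placeholder, not a measured value.
cpw_per_mbps = 0.5       # assumed VE cost read off a chart, CPW per Mbit/sec
throughput_mbps = 400    # sustained one-direction stream
print("CPW consumed:", cpw_per_mbps * throughput_mbps)  # 200.0 CPW
```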
The legend label “Mixed Files” indicates a save of many files of mixed sizes - equivalent to the save of the Windows system file disk. “Large files” indicates a save of many large files - in this case many 100MB files.
(Chart: FLBU SAV/RST Rates.)
Choose V5R4. In the “Contents” panel choose “iSeries Information Center”. Expand “Integrated operating environments” and then “Windows environment on iSeries” for Windows environment information, or “Linux” and then “Linux on an integrated xSeries solution” for Linux information on an IXS or attached xSeries server.
Chapter 18. Logical Partitioning (LPAR)
18.1 Introduction
Logical partitioning (LPAR) is a mode of machine operation where multiple copies of operating systems run on a single physical machine. A logical partition is a collection of machine resources that are capable of running an operating system.
• Allocate fractional CPUs wisely. If your sizing indicates two partitions need 0.7 and 0.4 CPUs, see if there will be enough remaining capacity in one of the partitions with 0.6 and 0.4 or else 0.7 and 0.3 CPUs allocated. By adding fractional CPUs up to a "whole" processor, fewer physical processors will be used.
The reasons for the LPAR overhead can be attributed to contention for the shared memory bus on a partitioned system, to the aggregate bandwidth of the standalone systems being greater than the bandwidth of the partitioned system, and to a lower number of system resources configured for a system partition than on a standalone system.
Also note that part of the performance increase of a larger system may have come about because of a reduction in contention within the CPW workload itself. That is, the measurement of the standalone 12-way system required a larger number of users to drive the system’s CPU to 70 percent than is required on a 4-way system.
(Figure 18.: LPAR Throughput Increase - total CPW of all partitions by LPAR configuration; relative to a standalone 12-way, the total increase in CPW capacity is 7% for 8-way+4-way, 9% for 2 x 6-way, and 13% for 3 x 4-way.)
18.4 LPAR Measurements
The following chart shows measurements taken on a partitioned 12-way system with the system’s CPU utilized at 70 percent capacity. The system was at the V4R4M0 release level. Note that the standalone 12-way CPW value of 4700 in our measurement is higher than the published V4R3M0 CPW value of 4550.
The following chart shows projected LPAR capacities for several LPAR configurations. The projections are based on 1-way and 2-way measurements taken when the system’s CPU was utilized at 70 percent capacity. The LPAR overhead was also factored into the projections.
Chapter 19. Miscellaneous Performance Information
19.1 Public Benchmarks (TPC-C, SAP, NotesBench, SPECjbb2000, VolanoMark)
iSeries systems have been represented in several public performance benchmarks. The purpose of these benchmarks is to give an indication of relative strength in a general field of computing.
The most commonly run of these is the SAP-SD (Sales and Distribution) benchmark. It can be run in a 2-tier environment, where the application and database reside on the same system, or on a 3-tier environment, where there are many application servers feeding into a database server.
This web site is primarily focused on results for systems that the Volano company measures themselves. These results tend to be for much smaller, Intel-based systems that are not comparable with iSeries servers.
of relatively lower delay cost.
• Waiting Time - The waiting time is used to determine the delay cost of a job at a particular time. The waiting time of a job which affects the cost is the time the job has been waiting on the TDQ for execution.
• Delay Cost Curves - The end-user interface for setting job priorities has not changed.
• Priority 47-51
• Priority 52-89
• Priority 90-99
Jobs in the same group will have the same resource (CPU seconds and disk I/O requests) usage limits. Internally, each group will be associated with one set of delay cost curves. This would give some preferential treatment to jobs of higher user priorities at low system utilization.
less CPU utilization, resulting in slightly lower transaction rates and slightly longer response times. However, the batch job gets more CPU utilization and consequently a shorter run time.
• It is recommended that you run with Dynamic Priority Scheduling for optimum distribution of resources and overall system performance.
of printers in the configuration. 70% of the remaining memory is allocated to the interactive pool; 30% to the base pool. A QPFRADJ value of 1 ensures that memory is allocated on the system in a way that the system will perform adequately at IPL time.
files of differing characteristics are being accessed. The pool attribute can be changed from *FIXED to *CALC and back at any time, so making a change and evaluating its effect over a period of time is a fairly safe experiment. More information about Expert Cache can be found in the Work Management guide.
To determine a reasonable level of page faulting in user pools, determine how much the paging is affecting the interactive response time or batch throughput. These calculations will show the percentage of time spent doing page faults. The following steps can be used: (all data can be gathered w/STRPFRMON and printed w/PRTSYSRPT).
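The arithmetic behind those steps can be sketched as follows; the inputs come from the STRPFRMON data, and the 7 ms average service time per fault used in the example is an assumed value for illustration only.

```python
# Sketch of the page-fault impact calculation described above.
# Inputs come from STRPFRMON/PRTSYSRPT output; the 7 ms average disk
# service time per fault is an assumed value for illustration.
def pct_time_page_faulting(faults_per_second: float,
                           avg_fault_service_ms: float) -> float:
    """Percent of elapsed time spent waiting on page faults."""
    return faults_per_second * avg_fault_service_ms / 1000.0 * 100.0

print(f"{pct_time_page_faulting(50, 7):.0f}%")  # 50 faults/sec * 7 ms -> 35%
```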
NOTE : It is very difficult to predict the improvement of adding storage to a pool, even if the potential gain calculated above is high. There may be instances where adding storage may not improve anything because of the application design. For these circumstances, changes to the application design may be necessary.
(Figure 19.: AS/400 NetFinity Software Inventory Performance - AS/400 510-2142, token rings, TCP/IP, V4R1; total collection time in minutes versus number of PC clients. About 100 clients were collected in 42 minutes.)
Conclusions/Recommendations for NetFinity
1. The time to collect hardware or software information for a number of clients is fairly linear.
2. The size of the AS/400 CPU is not a limitation. Data collection is performed at a batch priority. CPU utilization can spike quite high (ex.
Chapter 20. General Performance Tips and Techniques
This section's intent is to cover a variety of useful topics that "don't fit" in the document as a whole, but provide useful things that customers might do or deal with special problems customers might run into on iSeries.
Problem
It is too easy to use the overall pool's value of MAXACT as a surrogate for controlling the number of jobs. That is, you can forget the distinction between jobs and threads and use MAXACT to control the activity in a storage pool. But you are not controlling jobs; you are controlling threads.
20.2 General Performance Guidelines -- Effects of Compilation
In general, the higher the optimization, the harder the code will be to debug. It may also be the case that the program will do things that are initially confusing.
In-lining
For instance, suppose that ILE Module A calls ILE Module B.
20.3 How to Design for Minimum Main Storage Use (especially with Java, C, C++)
The iSeries family has added popular languages whose usage continues to increase -- Java, C, C++. These languages frequently use a different kind of storage -- heap storage.
The main storage consumed can be approximated as a + (b x N), where N is the number of data base records and a and b are constants. “a” is determined by adding up things like the static storage taken up by the application program. “b” is the size of the data base record plus the size of anything else, such as a Java object, that is created one entity per data base record.
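A worked instance of this model follows; the constants are invented for illustration only, not taken from any measurement.

```python
# Worked example of the storage = a + (b x N) model above; a and b are
# invented illustrative constants, not measured values.
a = 50 * 1024 * 1024   # fixed overhead: static storage, JVM, etc. (bytes)
b = 2000               # bytes per data base record (record + per-record objects)

for n_records in (10_000, 1_000_000):
    total_mb = (a + b * n_records) / (1024 * 1024)
    print(f"{n_records:>9} records -> about {total_mb:.0f} MB")
```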
The storage areas involved include:
• SQL records in a result set
• Program stack storage
• Java Virtual Machine and most WebSphere storage
• System values
• Operating system copies (e.g. Data Base) of application records
• SQL result set (nonrecord) storage
• Static storage from RPG and COBOL; static final in Java
How practical this change would be, if it represented a large, existing data base, would be a separate question. If this is at the initial design, however, this is an easy change to make. Boundary considerations. In Java, we are done because Java will order the three entities such that the least amount of space is wasted.
One thing easily misunderstood is variable length characters. At first, one would think every character field should be variable length, especially if one codes in Java, where variable length data is the norm.
20.4 Hardware Multi-threading (HMT)
Hardware multi-threading is a facility present in several iSeries processors. The eServer i5 models instead have the Simultaneous Multi-threading (SMT) facility, which is discussed in the SMT white paper at the following website: http://www-1.
HMT and SMT Compared and Contrasted
Some key similarities and differences are:
• SMT can improve throughput up to 40 percent and, in rare cases, higher.
• HMT typically improves throughput by 10 to 25 percent.
• SMT can allow QPRCMLTTSK to change at any time.
20.5 POWER6 520 Memory Considerations
Because of the design of the Power6 520 system, there are some key factors with the memory subsystem that one should keep in mind when sizing this system. The Power6 520, unlike the Power6 570, has no L3 cache, which has an effect on memory-sensitive workloads, such as Java applications.
activation time. This means that a partition that requires 4 GB of memory could be assigned 2 GB from the quad with 4 GB DIMMs and the other 2 GB from the quad with 8 GB DIMMs. This too can cause an application to have different performance characteristics on partitions configured with exactly the same amount of resources.
floating-point data may be copied using floating-point loads and stores, resulting in an alignment interrupt. As an example, consider the following structures, one specified as "packed" and the other allowed to be aligned per the compiler.
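The original compiler-struct listing is not reproduced here; as a stand-in, the following Python ctypes sketch shows the same effect: packing forces the 8-byte double to a 4-byte offset, while the default layout pads it to an 8-byte boundary.

```python
# Stand-in illustration using ctypes (the document's example used C structs):
# packing places the double at offset 4 (misaligned); the compiler-aligned
# layout pads it to offset 8.
import ctypes

class Packed(ctypes.Structure):
    _pack_ = 1  # byte packing, as with a "packed" compiler directive
    _fields_ = [("flag", ctypes.c_int32), ("value", ctypes.c_double)]

class Aligned(ctypes.Structure):  # default: natural alignment
    _fields_ = [("flag", ctypes.c_int32), ("value", ctypes.c_double)]

print(Packed.value.offset, ctypes.sizeof(Packed))    # 4 12 -> misaligned double
print(Aligned.value.offset, ctypes.sizeof(Aligned))  # 8 16 -> aligned double
```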
Chapter 21. High Availability Performance
The primary focus of this chapter is to present data that compares the effects of high availability scenarios using different hardware configurations. The data for the high availability tests are broken down into two different categories: switchable IASPs and geographic mirroring.
· Inactive switchover - The switching time is measured from the point at which the CHGCRGPRI command is issued on the primary system, which has no work, until the IASP is available on the new primary system.
· Partition - An active partition is created by starting the database workload on the IASP.
Switchover Measurements
NOTE: The information that follows is based on performance measurements and analysis done in the Server Group Division laboratory.
Active State: In geographic mirroring, pertaining to the configuration state of a mirror copy, this indicates that geographic mirroring is being performed if the IASP is online.
Workload Description
Synchronization: This workload is performed by starting the synchronization process on the source side from an unsynchronized geographic mirrored IASP.
Workload Configuration
The wide variety of hardware configurations and software environments available makes it difficult to characterize a ‘typical’ high availability environment and predict the results. The following section provides a simple description of the high availability tests.
Geographic Mirroring Measurements
NOTE: The information that follows is based on performance measurements and analysis done in the IBM Server Group Division laboratory. Actual performance may vary significantly from this test.
Synchronization on an idle system: The following data shows the time required to synchronize 1 terabyte of data.
Geographic Mirroring Tips
• For a quicker switchover time, keep the user ID (UID) and group ID (GID) of user profiles that own objects on the IASP the same between nodes of the cluster group. Having different UIDs lengthens vary-on times.
• Geographic mirroring is optimized for large files.
Chapter 22. IBM Systems Workload Estimator
22.1 Overview
The IBM Systems Workload Estimator (a.k.a. the Estimator or WLE), located at http://www.ibm.com/systems/support/tools/estimator , is a web-based sizing tool for System i, System p, and System x.
typical disclaimers that go with any performance estimate ("your experience might vary...") are especially true. We provide these sizing estimates as general guidelines only. 22.2 Merging PM for System i data into the Estimator The Measured Data workload of the Estimator is designed to accept data from various data sources.
account features like detailed journaling, resource locking, single-threaded applications, time-limited batch job windows, or poorly tuned environments.
Appendix A. CPW and CIW Descriptions
"Due to road conditions and driving habits, your results may vary." "Every workload is different." These are two hallmark statements of measuring performance in two very different industries. They are both absolutely correct.
CPW Application Description
The CPW application simulates the database server of an online transaction processing (OLTP) environment. Requests for transactions are received from an outside source and are processed by application service jobs on the database server.
A.2 Compute Intensive Workload - CIW
Unlike CPW values, CIW values are not derived from specific measurements of a single workload. They are modeled projections based upon the characteristics of internal workloads such as Domino workloads and application server environments such as those found with SAP or JDEdwards applications.
category that often fits into the CIW-like classification is overnight batch. Even though batch jobs often process a great deal of database work, there are relatively few jobs which means there is little switching of jobs from processor to processor. As a result, overnight batch data processing jobs sometimes act more like compute-intensive jobs.
Appendix B. System i Sizing and Performance Data Collection Tools
The following section presents some of the alternative tools available for sizing and capacity planning. (Note: There are products from vendors not included here that perform similar functions.)
B.1 Performance Data Collection Services
Collecting performance data with Collection Services is an operating system function designed to run continuously; it collects system and job level performance data at regular intervals, which can be set from 15 seconds to 1 hour.
predefined profile containing commonly used categories. For example, if you do not have a need to monitor the performance of SNADS transaction data on a regular basis, you can choose to turn that category off so that SNADS transaction data is not collected.
http://www.ibm.com/servers/eserver/iseries/perfmgmt/batch.html Unzip this file, transfer it to your System i platform as a save file, and restore library QBCHMDL. Add this library to your library list and start the tool by using the STRBCHMDL command. Tips, disclaimers, and general help are available in the QBCHMDL/README file.
Appendix C. CPW and MCU Relative Performance Values for System i
This chapter details the relative system performance values:
• Commercial Processing Workload (CPW).
C.1 V6R1 Additions (October 2008)
C.1.1 CPW values for the IBM Power Systems - IBM i operating system
570 (9117-MMA), processor feature 7388, 5.0 GHz, 2x4MB / 32MB L2/L3 cache per chip: CPW 11000 / 21600 / 40300 / 56800 / 77600
70000 51500 36200 19400 9850 2x4MB / 32MB 4.
2. Memory speed differences account for some slight variations in performance difference between models.
3. CPW values for Power System models introduced in October 2008 were based on IBM i 6.1 plus enhancements in post-release PTFs.
C.1.4 CPW values for IBM Power Systems - IBM i operating system
9200-32650 2 - 8 2x4MB / 32MB 4.
550 (9409-M50): processor feature 4966, 4200 MHz, 2x4MB / 32MB L2/L3 cache (1) per chip, CPU range (2) 1 - 4: Processor CPW 4800 - 18000
520 (9408-M25): processor feature 5634, 4200 MHz, 2x4MB / 0MB L2/L3 cache (1) per chip, CPU range (2) 1 - 2: Processor CPW 4300 - 8300
520 (9407-M15): processor feature 5633, 4200 MHz, 2x4MB / 0MB L2/L3 cache (1) per chip, CPU range (2) 1: Processor CPW 4300
Table C.
Blade JS22 (7998-61X): server feature n/a, edition feature n/a, processor feature 52BE, 4000 MHz, 2x4MB / 0 MB L2/L3 cache (1) per chip, 3 of 4 CPUs (2): Processor CPW 11040
Blade JS22 (7998-61X): server feature n/a, edition feature n/a, processor feature 52BE, 4000 MHz, 2x4MB / 0 MB L2/L3 cache (1) per chip, 3.7 of 4 CPUs (3): Processor CPW 13800
Table C.
6100 2800 2800 1 (3) 1.9/36MB 1900 NA 7735 9406-520 6100 2800 2800 1 (3) 1.9/36MB 1900 NA 7374 (5) 9406-520 8200 0 3800 1 1.9/36MB 1900 NA 7691 (10) 9406-520 8200 0 3800 1 1.9/36MB 1900 NA 7784 9406-520 8200 - 15600 0 3800-7100 1 - 2 1.9/36MB 1900 NA 7785 9406-520 8200 - 15600 3800-7100 3800-7100 1 - 2 1.
NR - 6600 (9) 30 600-3100 9 1 (3) 1.9MB/NA 1900 7680 7140 9405-520 NR - 6600 (9) 30 600-3100 9 1 (3) 1.9MB/NA 1900 7681 7141 9405-520 NR - 6600 (9) 30 600-3100 9 1 (3) 1.9MB/NA 1900 7682 7142 9405-520 NR - 6600 (9) 30 600 - 3100 9 1 (3) 1.9/NA 1900 7353 7156 9405-520 2600 - 8200 (9) 60 1200-3800 9 1 (3) 1.
NA recommended 30 500 1 (3) NA 1.9 MB 1500 520-0900 (7450) 2300 60 1000 1 (3) NA 1.9MB 1500 520-0901 (7451) 2300 1000 1000 1 (3) NA 1.9 MB 1500 520-0902 (7552) 5 2300 0 1000 1 (3) NA 1.9 MB 1500 520-0902 (7458) 2300 1000 1000 1 (3) NA 1.9 MB 1500 520-0902 (7459) ) 5500 2400 2400 1 NA 1.
8. The 64-way is measured as two 32-way partitions since i5/OS does not support a 64-way partition.
9. IBM stopped publishing CIW ratings for iSeries after V5R2. It is recommended that the IBM Systems Workload Estimator be used for sizing guidance, available at: http://www.
C.8.2 Model 810 and 825 iSeries for Domino (February 2003)
3100 380 0 1020 1 2 MB 540 810-2466 (7407) 4200 530 0 1470 1 4 MB 750 810-2467 (7410) 7900 950 0 2700 2 4 MB 750 810-2469 (7428) 11600 na 0 na 4 1.41 MB 1100 825-2473 (7416) 17400 2890 0 6600 6 1.
10680 - 20910 1630 - 3220 4550 4200-7350 4 - 8 4 MB 540 830-2349 (1537) 10680 - 20910 1630 - 3220 2000 4200-7350 4 - 8 4 MB 540 830-2349 (1536) 10680 - 20910 1630 - 3220 1050 4200-7350 4 - 8 4 MB 540 .
C.10.1 Model 8xx Servers
Model (feature): Chip Speed MHz, L2 cache per CPU, CPUs, Processor CPW, Interactive CPW, Processor CIW, MCU
840-2461: 600, 16 MB, 24, 202., 240, 10950, 77800
840-2461 (1542): 600, 16 MB, 24, 20200, 560, 10950, 77800
840-2461 (1543): 600, 16 MB, 24, 20200, 1050, 10950, 77800
840-2461 (1544): 600, 16 MB, 24, 20200, 2000, 10950, 77800
840-2461 (1545): 600, 16 MB, 24, 20200, 4550, 10950, 77800
840-2461 (1546): 600, 16 MB, 24, 20200, 10000, 10950, 77800
840-2461 (1547): 600, 16 MB, 24, 20200, 16500, 10950, 77800
840-2461 (1548): 600, 16 MB, 24, 20200, 20200, 10950, 77800
Table C.
C.10.4 Capacity Upgrade on-demand Models
New in V4R5 (December 2000), Capacity Upgrade on Demand (CUoD) capability offered for the iSeries Model 840 enables users to start small, then increase processing capacity without disrupting any of their current operations.
59600 - 77800 8380 - 10950 20200 16500 - 20200 18 - 24 16 MB 600 840-2354 (1548) 59600 - 77800 8380 - 10950 1 6500 16500 - 20200 18 - 24 16 MB 600 840-2354 (1547) 59600 - 77800 8380 - 10950 10000 1650.
C.11 V4R5 Additions
For the V4R5 hardware additions, the tables show each new server model's characteristics and its maximum interactive CPW capacity. For previously existing hardware, the tables show f.
16500 16500 24 8 MB 500 840-2420 (1547) 10000 16500 24 8 MB 500 840-2420 (1546) 4550 16500 24 8 MB 500 840-2420 (1545) 2000 16500 24 8 MB 500 840-2420 (1544) 1050 16500 24 8 MB 500 840-2420 (1543) 560.
C.11.4 SB Models
Model: Chip Speed MHz, L2 cache per CPU, CPUs, Processor CPW*, Interactive CPW
SB2-2315: 540, 4 MB, 8, 7350, 70
SB3-2316: 500, 8 MB, 12, 10000, 120
SB3-2318: 500, 8 MB, 24, 16500, 120
Table C.
5308.3 4550 4550 12 8 MB 262 740-2070 (1513) 4270 3660 4550 12 8 MB 262 740-2070 (1512) 2333.3 2000 4550 12 8 MB 262 740-2070 (1511) 1225 1050 4550 12 8 MB 262 740-2070 (1510) 140 120 4550 12 8 MB 262 740-2070 (Base) 4270 3660 3660 8 8 MB 262 740-2069 (1512) 2333.
Note: the CPU not used by the interactive workloads at their Max CPW is used by the system CFINTnn jobs. For example, for the 2386 model the interactive workloads use 17.8% of the CPU at their maximum and the CFINTnn jobs use the remaining 82.2%. The processor workloads use 0% CPU when the interactive workloads are using their maximum value.
C.13 AS/400e Model Sxx Servers
For AS/400e servers, the knee of the curve is about 1/3 the maximum interactive CPW value.
0.9 2.7 21.3 64 2340 12 2261 1.2 3.6 21.3 64 1794 8 2256 0.8 2.6 40 120 4550 12 2208 1.1 3.2 40 120 3660 8 2207 S40 1.2 3.6 21.3 64 1794 8 2260 2.
2.6 7.7 10.9 32.2 650.0 4 n/a 2157 3 9 10.7 32.2 598.0 4 n/a 2156 4.5 13.5 10.7 32.2 319.0 2 n/a 2155 6.8 20.3 15.9 32.2 188.2 1 n/a 2154 53S 8.9 23.8 12.0 32.2 138.0 1 n/a 2122 10 30 10.7 32.2 111.5 1 n/a 2121 9.3 27.8 8.1 22.5 81.6 1 n/a 2120 9.9 29.
Table C.16.1 AS/400e Custom Application Server Model SB1
Model: CPUs, SAP Release, SD ds/hr @ 65% CPU Utilization, FI ds/hr @ 65% CPU Utilization
2312: 8, 3.1H, 109,770.49, 274,426.23
2312: 8, 4.0B, 65,862.29, 164,655.74
2313: 12, 3.1H, 158,715.76, 396,789.40
2313: 12, 4.0B, 95,229.46, 238,073.64
C.
C.18 AS/400 CISC Model Capacities
Table C.18.1 AS/400 CISC Model: 9401
Model: Feature, Maximum CPUs, Maximum Memory (MB), Maximum Disk (GB), CPW
P02: n/a, 1, 16, 2.1, 7.3
P03: 2114, 1, 24, 2.99, 7.3
P03: 2115, 1, 40, 3.93, 9.6
P03: 2117, 1, 56, 3.93, 16.8
E06: 1, 40, 7.9, 7.3
F06: 1, 40, 8.2, 9.6
7.
Model: Maximum CPUs, Maximum Memory (MB), Maximum Disk (GB), CPW
27.
E60: 1, 192, 146, 28.1
D70: 1, 256, 146, 32.3
E70: 1, 256, 146, 39.2
F60: 1, 384, 146, 40.0
D80: 2, 384, 256, 56.6
F70: 1, 512, 256, 57.0
E80: 2, 512, 256, 69.4
E90: 3, 1024, 256, 96.7
F80: 2, 768, 256, 97.1
E95: 4, 1152, 256, 116.6
F90: 3, 1024, 256, 127.7
F95: 4, 1280, 256, 148.8
F97: 4, 1536, 256, 177.4