1. What is the shared application file system?
Oracle EBS 11i has the following mid-tier application servers in our environment: web server, forms server, MWA server, concurrent manager and report server, and administration server. The application code base is scattered across different servers. For example, the web, forms and MWA server code lives on the web/forms tier, while the concurrent manager, report server and administration server code lives on the ccm/admin node. With a shared application file system, the code for all servers is installed on a single storage location. That location is then mounted onto the different physical nodes, and each node can be configured to run specific application servers.
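As a rough illustration, assuming the shared storage is exported over NFS (the filer name and export path below are hypothetical), every application node would carry an entry like this:

    # /etc/fstab on each application node (nfs-filer and export path are hypothetical)
    nfs-filer:/vol/appldev  /mnt/appldev  nfs  rw,hard,intr  0 0

    # mount it; every node then sees the same APPL_TOP and COMMON_TOP
    mount /mnt/appldev

Which services actually start on a given node is then a matter of that node's configuration, not of where the code physically lives.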
2. Advantages of implementing the shared application file system
2.1 enhance DBA productivity and shorten downtime for maintenance
DBAs only need to run adpatch (for applying patches) and adadmin (for system maintenance) once against the shared application file system, rather than four times as we do now. Furthermore, we can use Distributed AD to shorten adpatch and adadmin sessions even further.
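As a sketch of how Distributed AD works once the file system is shared (the worker counts here are just examples): start adpatch on one node with only part of the workers local, and attach the remaining workers from another node with adctrl.

    # node 1: start the patch session with 8 workers total, 4 running locally
    adpatch workers=8 localworkers=4

    # node 2 (same shared APPL_TOP): attach the remaining workers
    adctrl distributed=y
    # when prompted, assign the remaining worker numbers (5-8) to this node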
2.2 reduce storage and tape usage and shorten tape backup time
As the entire application codebase sits on the shared media, codebase duplication is greatly reduced. The tape backup of the application code only needs to be taken once, from any one physical node, rather than four times as now.
2.3 code migration will be easier
Currently, we have to keep the customized forms/reports code in sync across the two web/forms nodes and the two ccm nodes respectively. With the shared application file system, we only need to migrate the code into the shared location once, and all servers see the change immediately.
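For example, migrating a customized form would shrink to one copy and one compile on the shared file system (the XXCUST names below are hypothetical, and the apps password is elided):

    # copy the custom form source into the shared APPL_TOP once
    cp XXCUSTFRM.fmb $AU_TOP/forms/US/

    # generate it once; all forms nodes pick up the new .fmx from the shared location
    f60gen module=$AU_TOP/forms/US/XXCUSTFRM.fmb userid=apps/<password> \
        module_type=form output_file=$XXCUST_TOP/forms/US/XXCUSTFRM.fmx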
2.4 shorten cloning
Currently, we have to run the preclone procedure on both the web/forms and ccm/admin nodes, take a backup on each node, move all the code to the destination nodes, and then run untar, adcfgclone and other commands on all the servers. With the shared application file system, we only need to run each step once.
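A rough sketch of the Rapid Clone flow on a shared file system (the archive paths are hypothetical; the exact steps come from Oracle's cloning documentation):

    # source system: prepare and archive the shared application file system once
    cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
    perl adpreclone.pl appsTier
    tar cf /backup/appl_shared.tar /mnt/applprod

    # target system: restore once, then configure the apps tier
    tar xf /backup/appl_shared.tar -C /mnt/appldev
    cd <COMMON_TOP>/clone/bin
    perl adcfgclone.pl appsTier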
2.5 easy route to virtual servers
As all the application code is in one location, we can move the application server tiers onto virtual servers.
2.6 easy to expand capacity
It is much easier to add a new node to expand capacity with the shared application file system, with little or no downtime.
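A rough sketch of adding a node, based on Oracle's shared application file system procedure (the filer name is hypothetical, and the exact commands should be checked against the relevant Metalink note for the release):

    # new node: mount the existing shared application file system
    mount nfs-filer:/vol/applprod /mnt/applprod

    # create a context file for the new node, then run AutoConfig against it
    cd $COMMON_TOP/clone/bin
    perl adclonectx.pl addnode contextfile=<existing context file>
    adconfig.sh contextfile=<new context file>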
3. Disadvantages of the shared application file system
The only concern is that, with the entire codebase on the shared storage media, a failure of that media takes the whole system down. However, since we are not using local disks for the application codebase today either, a storage media failure would already be a problem under the current setup.
4. Concerns
4.1 An NFS mount is slower for running rm and tar commands. However, we only need to run these commands during cloning or a server migration, and since cloning will run from one node rather than four, the slowness is not a problem.
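If we want to quantify the overhead before committing, a simple timing comparison over a copy of the same directory tree on local disk and on the NFS mount would do (paths hypothetical):

    # local-disk baseline vs. the same tree over NFS
    time tar cf /tmp/local.tar /opt/oracle/appldev
    time tar cf /tmp/nfs.tar   /mnt/appldev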
4.2 Cloning before the shared application file system is implemented on PROD
We will keep the original mount points. If we need to clone DEV/INT/GOLD in the meantime, we will follow the current procedure. After the PROD implementation, we will update the clone procedure.
5. Prerequisites
5.1 new shared mount points are needed: /mnt/appldev, /mnt/applmgr, /mnt/applprod, etc., with 75GB of disk space for each environment
5.2 create the appldev, applmgr and applprod users on the current web/forms tier
5.3 create /opt/oracle/appldev, /opt/oracle/applmgr and /opt/oracle/applprod on all apps nodes (see the setup sketch after this list)
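A minimal sketch of the prerequisite setup on each apps node (exact useradd flags, the group, and the filer name will vary; the dba group below is an assumption):

    # 5.2: application owner accounts
    useradd -g dba appldev
    useradd -g dba applmgr
    useradd -g dba applprod

    # 5.3: base directories on every apps node
    mkdir -p /opt/oracle/appldev /opt/oracle/applmgr /opt/oracle/applprod
    chown appldev:dba /opt/oracle/appldev

    # 5.1: shared mount points, 75GB per environment (nfs-filer is hypothetical)
    mkdir -p /mnt/appldev /mnt/applmgr /mnt/applprod
    mount nfs-filer:/vol/appldev /mnt/appldev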
6. Plan
Once the prerequisites are met, we can begin the implementation. On PTCH the conversion took me about 13 hours, and we should be able to shorten that downtime further. We also do not need to shut down the database, and there is an easy fallback plan in case the shared application file system approach does not work.
We will start with DEV and then GOLD. We will test on GOLD for about two months before migrating to PROD. As there are no code changes involved, we do not need a full testing cycle.
Some background: the web/forms/MWA tier has two nodes; the concurrent manager/admin server tier has two nodes; the RAC database has two nodes.