I also think it might require certain CPU feature(s) that aren't available.
I can run OTS 9.4 on i7-7*** (not on the list, but it works and it's a fairly new CPU). I did see some CPU-related errors in the console when OTS was booting, but none were fatal.
I have installed the 7.3.6 simulator onto a Linux host running RedHat that sits on top of Hyper-V.
The Linux host has 2 interfaces.
One I have put down, and it is in promiscuous mode.
I have set up the simulator fine, but cannot get any network connectivity.
Does anyone have any ideas regarding subnets, netmasks, etc.?
I originally created DNS entries for ns0 and ns1 on the simulator, and selected the down interface on the underlying Linux host during the setup.
That did not work.
So, I tried using the same IP address on ns0 as the down interface on the Linux host.
Still doesn't work.
Is there something that needs to be set on Hyper-V that I have missed, or on the Linux host?
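For reference, this is roughly what I have set on the Linux host and what I assume might be needed on the Hyper-V side; the interface and VM names are placeholders, and the MAC address spoofing setting is only a guess on my part, not something I have confirmed:

# On the RedHat host: second interface has no IP and is in promiscuous mode (interface name is a placeholder)
ip link set dev eth1 promisc on
ip link show eth1

# On the Hyper-V host, in an elevated PowerShell (VM name is a placeholder):
# allow the guest to send frames with other source MACs, which bridged/promiscuous setups usually need
Set-VMNetworkAdapter -VMName "rhel-sim-host" -MacAddressSpoofing On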
Thanks.
Hi there!
The 7.3 simulation environment is no longer anywhere near current. For 7-mode simulation, we have 8.2 simulators available.
However, I would strongly suggest you do any learning on our newer ONTAP 9 environment simulators.
Hope this helps!
Hi,
the document describing the setup of the simulator covers only Windows and Mac; however, I would like to run the simulator in VMware Player on a Linux (Debian) system.
Is there any reason this should not work?
And what would be the minimal hardware requirements (could I make do with only 4 GB of RAM)?
Many thanks!
Hi,
that sounds very interesting, I am about to try that myself.
Could you describe how you did it?
Many thanks.
I wrote a script that works perfectly in a PowerShell runspace, but when run in SCOM as a monitor, Connect-NcController returns a $null value.
I can run it under the local system context or under my user credentials context, so it is not a search path issue or anything like that. The only difference I can find is that when SCOM creates the PowerShell runspace, it has a $host.Version of "7.0.5000.0" instead of "4.0".
The code that works in the 4.0 runspace but not in the 7.0.5000.0 runspace is as follows:
# Convert the plain-text password into a SecureString
$Password = ConvertTo-SecureString -AsPlainText -Force $UserPassword
# Build the credential object for the cluster admin account
$Creds = New-Object System.Management.Automation.PSCredential($UserName, $Password)
# Open the connection; this is the call that comes back $null under SCOM
$Controller = Connect-NcController -Name $ControllerName -Credential $Creds
Does anyone know what is different in the SCOM implementation of PowerShell versus the standard PowerShell runspace that causes a $null value to be returned? I have the cmdlet in a try/catch, but there is no error, so the catch portion of the code never executes. I could at least work with an error, but I just get back a $null value, which I cannot work with.
Our storage team does not have any vested interest in using the plug-in for SCOM, since it would monitor the whole solution, but we have an interest in monitoring the CIFS shares and the volumes for alerting and reporting, which is why I went this route. I can come up with a workaround, but I was hoping to find a more forthright solution.
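For context, the direction I am trying to take the diagnostics looks roughly like this; the DataONTAP module name and the Write-Error logging are my assumptions about what might surface the failure, not something I have confirmed works under SCOM:

# Surface module-load failures instead of silently getting $null later
Import-Module DataONTAP -ErrorAction Stop

$Password = ConvertTo-SecureString -AsPlainText -Force $UserPassword
$Creds = New-Object System.Management.Automation.PSCredential($UserName, $Password)

try {
    # -ErrorAction Stop promotes non-terminating errors so the catch block actually fires
    $Controller = Connect-NcController -Name $ControllerName -Credential $Creds -ErrorAction Stop
}
catch {
    # Record what actually failed inside the SCOM runspace, along with its PowerShell version
    Write-Error ("Connect-NcController failed (PS {0}): {1}" -f $PSVersionTable.PSVersion, $_)
}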
Craig
I'm getting familiar with NetApp by using the 9.4 simulator in my home lab. Currently I have a 2-node cluster. I found the ONTAP 9 High Availability guide. Based on the instructions, I have booted node 1 into Maintenance Mode. Now I am trying to run the ha_config commands (ha_config show, ha_config modify controller ha-state, etc.). It says "ha_config commands not supported on this configuration". Is it possible to run the simulator in HA mode to test the HA features out? I need to have a good understanding of how HA works. Thanks.
Hi
I suggest using https://www.flackbox.com/ to see how to install the simulator; they also have a YouTube video on how to install it.
BTW, this simulator does not support HA. Please read the Simulate ONTAP 9.4 Installation and Setup Guide and check out page 6.
Simulate ONTAP does not support the following features:
High Availability (CFO/SFO)
Fibre Channel and SAN connectivity
RLM (Remote LAN Module)
CFE, BIOS, shelf FW, and so on
Multipathing
Thanks so much for your reply about page 6 of the Installation and Setup Guide for the simulator. This perfectly answers my question: HA is not available on the simulator. I've seen the training from Flackbox and I love it. The problem is I need to learn the 9.4 GUI, and all his current training is on 8.x. I'm waiting for him to get his 9.x material out.
I think this community can help. Let us know what you're specifically looking for in 9.4.
The GUI is a very user-friendly tool; click on each option and explore it.
You can run HA in ONTAP Select, which you can spin up in 90-day eval mode without a license. Get it from the evaluations section of the mysupport downloads page.
HA in ONTAP Select uses a shared-nothing storage model, so it's a bit different architecturally from the shared-disk model used in the hardware appliances, but it does support takeover/giveback operations and automated NDU workflows just like the hardware appliances do.
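Once an eval Select cluster is up, you can exercise takeover and giveback from the cluster shell with something like the following; the cluster and node names are placeholders, so substitute your own:

cluster1::> storage failover show
cluster1::> storage failover takeover -ofnode cluster1-02
cluster1::> storage failover giveback -ofnode cluster1-02
cluster1::> storage failover show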
Hi,
I need to ask for help here related to my test lab, which uses the ONTAP 9.4 simulator and where I am currently doing a SnapMirror exercise. Right now I am stuck on the SVM peering part.
I'm still playing with the peer configuration part, and I've set up cluster1 and cluster2 Select instances.
I've set up cluster peering and it looks healthy.
I'm trying to set up SVM peering so I can get an SVM on my cluster1 to replicate to my cluster2.
I can see the cluster, but I simply see "no permitted SVMs", and I've not tracked down any KB article that seems to tell me why.
The source SVM is basic, with just a single volume and the NFS and CIFS protocols enabled.
The destination SVM that I created is the same.
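For reference, the peering commands I have been running from the CLI look roughly like this; the SVM names are placeholders from my lab, and I am not certain this is the step I am getting wrong:

cluster1::> vserver peer create -vserver svm_source -peer-vserver svm_dest -peer-cluster cluster2 -applications snapmirror
cluster2::> vserver peer accept -vserver svm_dest -peer-vserver svm_source
cluster1::> vserver peer show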
Any thoughts on what I am missing? Thanks.
Hi,
I have a vSIM 9.3 set up and running in my test lab. On a recent check I found there is some issue with it, as shown in the attached screenshot. This is actually the 2nd time it has happened. Not sure if this can be fixed instead of rebuilding them.
It's actually running on VMware ESXi 6.5.
Thanks
Can you please increase the virtual machine RAM to 5.1 GB if running a single node, or otherwise 5.1 GB multiplied by the number of nodes (for example, 10.2 GB for two nodes)?
The RAM is already assigned at 5.1 GB; I had increased it to 8 GB, but it still failed to boot up.
From what I see in the screenshot, you are using 9GB simulated disks. If the SIM has been in use for a while, you may have filled the virtual disk that the simulated disks are stored on.
You can check this by stopping it at the boot menu, entering the systemshell, and running df to see the available space on the /sim filesystem.
Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
(9) Configure Advanced Drive Partitioning.
Selection (1-9)? systemshell
Forking /bin/sh pid: 2441
# df -h
Filesystem     Size  Used  Avail  Capacity  Mounted on
/dev/md0.uzip  998M  844M  153M   85%       /
/dev/ad0s2     1.9G  840M  1.0G   44%       /cfcard
devfs          1.0K  1.0K  0B     100%      /dev
/dev/md1.uzip  144M  143M  1.2M   99%       /platform
/dev/md2       31M   60K   28M    0%        /tmp
/dev/ad3       223G  4.5G  201G   2%        /sim
/dev/ad1s1     4.8M  1.1M  3.3M   26%       /var
procfs         4.0K  4.0K  0B     100%      /proc
If your /sim file system is filling up, the default size of the IDE1:1 disk is too small for your simulated disk layout. You can either provision fewer or smaller simulated disks, or you can replace IDE1:1 with a larger disk before the very first power on of a new simulator.
Yes, I'm guessing the disk is full too, based on the error messages I managed to catch earlier during boot.
I have already increased the disk previously during my initial deployment. Correct me if I'm wrong, but the vsim only supports up to a maximum of 550GB, right?
Please share the details of how you got the 9.4 simulator running in KVM.
Thanks
The raw sim disk file size could be as much as 9.9GB, depending on the disk type, for up to 56 disks. I round up to 600GB just to leave some buffer. There are a few other odds and ends on the /sim partition, along with its file system overhead.
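As a rough worked example: 56 disks x 9.9 GB is about 554 GB of raw simulated disk files, so rounding the IDE1:1 disk up to 600 GB leaves roughly 45 GB for those other odds and ends and the file system overhead.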