Mac network accounts are unavailable – macOS Sierra, High Sierra

Applies to: macOS Sierra, macOS High Sierra, Active Directory 2008 R2 functional level and greater, Windows Security Baselines for Active Directory

Our environment consists of Mac computers that are bound to Active Directory. Recently we deployed some new Active Directory 2016 domain controllers, which also have a Windows Security Baseline applied as a GPO for security purposes. Windows Security Baselines can be found here.

We immediately started to see issues with Mac computers: the all-too-familiar “Network Accounts are Unavailable” error message at the login screen.


After extensive troubleshooting, we determined that the problem was with the Windows Security Baselines being applied to the domain controllers, and more specifically with this setting:

Domain controller: LDAP server signing requirements
Value = Require signing

Here is the link to the reference article for this security setting.

By default, the macOS Open Directory client does not sign or encrypt the LDAP connections used to communicate with Active Directory. The Open Directory client can sign and encrypt LDAP connections with the following configuration:

dsconfigad -packetencrypt ssl


/usr/bin/security add-trusted-cert -d -p basic -k /Library/Keychains/System.keychain <path/to/certificate/file>

These commands are described in the Packet signing and encryption section of the following Apple support article:

Hope this helps!

Posted in Active Directory, macOS, Security, Windows Security Baselines

Kemp LoadMaster RESTful API Management with PowerShell

There are several articles explaining how to access and manage the Kemp LoadMaster with the RESTful API, but not very many show you how to connect via PowerShell. I attempted to follow several of them, but I kept running into the following error:

“Invoke-RestMethod : The underlying connection was closed: An unexpected error occurred on a send.”

Nothing that I tried could solve this problem. I found several articles stating that I would need to tell PowerShell to ignore certificate problems, or force PowerShell to use TLS 1.2 before calling the Invoke-RestMethod cmdlet:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

However, this did not work for me at all! The problem is that the Invoke-RestMethod and Invoke-WebRequest cmdlets run in their own runspace.

Follow this procedure!

PowerShell code to connect to the Kemp LoadMaster and list the virtual services:

$pass = Get-Content "c:\scripts\KempPassword.txt" | ConvertTo-SecureString
$User = "YourKempUserAccount"
$MyCredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $pass
$kempurl = ""
$uri = $kempurl + "/access/listvs"
[string]$response = Invoke-RestMethod $uri -Credential $MyCredential


In the example above, I am using an encrypted password that was saved in the KempPassword.txt file. You can generate this password by executing:

"P@ssword1" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString

The Kemp LoadMaster will return an XML response that will need to be parsed as required. Here are two websites that I used to parse the XML in PowerShell:
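If you ever need to sanity-check the response outside PowerShell, a quick-and-dirty extraction with standard shell tools works too. The XML below is a hypothetical sample merely shaped like a listvs reply; the exact element names in your appliance's response may differ.

```shell
# Hypothetical sample shaped like a LoadMaster listvs response;
# real field names may differ on your appliance.
cat > response.xml <<'EOF'
<Response stat="200" code="ok">
  <Success>
    <Data>
      <VS>
        <Index>1</Index>
        <NickName>web-farm</NickName>
      </VS>
    </Data>
  </Success>
</Response>
EOF

# Pull out the virtual service names (crude, but fine for a quick look)
names=$(grep -o '<NickName>[^<]*</NickName>' response.xml | sed 's/<[^>]*>//g')
echo "$names"
```

A proper XML parser is still the right tool for anything beyond a quick look.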



Previous Articles on the Kemp API and PowerShell (did not work for me)

Kemp’s API and PowerShell documentation



Posted in API Programming, Kemp LoadMaster, Powershell

Graylog REST API – Creating a User Token with PowerShell

In order to access the Graylog REST API you need to do the following:

  1. Create a new user within the User UI
  2. Temporarily assign the user the Admin role (a permission is needed to create a token), or use the REST API to assign the users:tokenlist, users:tokencreate, and users:tokenremove permissions at this link.
  3. Using PowerShell, execute the following commands:
#The exact username and password for the new user created
$hash = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($username):$($password)"))
#Define your server uri. Note: myuser is the name of my new user and mytoken is the name of the new token
$uri = ''
#Create the token, and save the response into the $token variable
$token = Invoke-RestMethod -Uri $uri -Method POST -Headers @{"Content-Type"="application/json";"Authorization" = "Basic $hash"}
#Now output the token and save it
$token.token

  4. Now that you have created the token, remove the Admin role from your API user in the User UI.
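For what it's worth, the Base64 Basic-auth hash built in the PowerShell above is nothing PowerShell-specific; the same value can be produced from a plain shell (the username and password here are placeholders):

```shell
# Placeholder credentials; substitute the user created in step 1
username='myuser'
password='mypassword'

# Same value as [Convert]::ToBase64String(...GetBytes("user:pass")) above
hash=$(printf '%s:%s' "$username" "$password" | base64)
echo "$hash"
```

Whatever produces the hash, the header sent to Graylog is `Authorization: Basic <hash>`.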


Posted in Graylog, Powershell

Office 365 Address List Recipients Missing

In my organization we rely heavily on address lists within Exchange Online. For several years people have been able to find others within the organization by opening the address book within Outlook or OWA, selecting the department, and finding the person they are looking for.

We have accomplished this by simply creating an address structure that looks similar to the following:

"\Departments" -Addresslist under the root that does not contain any recipients

All of our Department address lists have a recipient filter similar to the following:

RecipientFilter: MemberOfGroup -eq 'CN=department,,OU=Microsoft Exchange Hosted Organizations,DC=NAMPR04A092,DC=PROD,DC=OUTLOOK,DC=COM'


Then one day we noticed that all of our Address Lists no longer contained any recipients!

I contacted Microsoft and spent nearly a day and a half on the phone with no resolution. After analyzing my address lists, I noticed a discrepancy between the RecipientFilter and the LdapRecipientFilter. It appears that Microsoft moved all of my distribution groups to a new location within their Azure AD domain infrastructure. They did update the RecipientFilter for all of my address lists to point to the new DistinguishedName of my distribution groups, but somehow this did not prevent the lists from breaking.



The resolution for me was to completely remove all of our custom address lists and re-create them. Even though the RecipientFilter was correct, the LdapRecipientFilter was not, and you cannot set the LdapRecipientFilter using the Set-AddressList cmdlet.

#Remove all custom address lists under Departments
Get-AddressList "\Departments\*" | Remove-AddressList -Recursive

Once all the address lists had been removed, I simply recreated them using an existing PowerShell script that is scheduled to run nightly. This script checks for new on-premise distribution groups (mail-enabled security groups synced with Azure AD) and creates an address list for each one if it doesn’t already exist.

The final step was to “tickle” each mailbox and mail user (update a trivial property so that the address list membership is re-evaluated) as described in several articles.

I hope this helps anyone with this problem. Let me know if you came across this problem and what you may have done to fix it. Thanks!

Posted in Exchange Online, Office 365

Installing Cachet on Red Hat Enterprise Linux 7 (RHEL)

Below is my attempt at documenting the steps I performed to successfully install Cachet on RHEL 7. I used not only the official installation guide from Cachet, but also several other guides from people who have successfully installed it on Ubuntu and CentOS. For reference, here are the guides I used to install Cachet:

Install and Configure MariaDB

yum install mariadb-server mariadb
systemctl enable mariadb
systemctl start mariadb
#Secure the MariaDB installation and set the root password
mysql_secure_installation
#Create the cachet database
mysql -e "create database cachet"
#Create the cachet user
mysql -e "create user 'cachet'@'localhost' identified by 'CACHET_USER_PASSWORD'"
mysql -e "grant all privileges on cachet.* to 'cachet'@'localhost'"
mysql -e "flush privileges"

Configure Additional Repositories

Enable the ‘optional’ repository (rhel-7-server-optional-rpms) and the ‘extras’ repository (rhel-7-server-extras-rpms).

Do this by modifying the /etc/yum.repos.d/redhat.repo file

Look for the [rhel-7-server-optional-rpms] and [rhel-7-server-extras-rpms] sections

Change enabled = 0 to enabled = 1 for both repositories.
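The edit can also be scripted. The sketch below works on a throwaway copy of the file so nothing real is touched; on a registered system, `subscription-manager repos --enable=rhel-7-server-optional-rpms` (and likewise for extras) is the supported way to achieve the same thing.

```shell
# Throwaway copy standing in for /etc/yum.repos.d/redhat.repo
cat > redhat.repo.example <<'EOF'
[rhel-7-server-optional-rpms]
enabled = 0
[rhel-7-server-extras-rpms]
enabled = 0
EOF

# Flip enabled = 0 to enabled = 1 for every section in the copy
sed -i 's/^enabled = 0$/enabled = 1/' redhat.repo.example
grep -c 'enabled = 1' redhat.repo.example
```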

Install the Extra Packages for Enterprise Linux

Now install EPEL by running

yum install epel-release

Install the Remi repository for PHP packages that are not available in the default system repositories:

rpm -Uvh

Install the Cachet package dependencies

yum --enablerepo=remi,remi-php56 install php-fpm php-common php-mcrypt php-mbstring php-apcu php-xml php-pdo php-intl php-mysql php-cli php-gd git

PHP configuration

#verify installed php version(s)
rpm -qa | grep -i php
#Enable php-fpm to run at startup
systemctl enable php-fpm
#Start php-fpm
systemctl start php-fpm

By default, php-fpm listens on 127.0.0.1:9000. This listen address will be used in the Apache virtual host config detailed below. If you want to change this default port, modify the following file:

vi /etc/php-fpm.d/www.conf
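For reference, the relevant directive in www.conf looks like this (127.0.0.1:9000 is the stock RHEL 7 default; adjust it here if you want a different address or port):

```ini
; /etc/php-fpm.d/www.conf
listen = 127.0.0.1:9000
```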

Install Composer

curl -sS | php -- --install-dir=/bin --filename=composer

Install Cachet

Clone the Cachet Repository

cd /var/www/
git clone
cd Cachet/
git tag -l
git checkout v2.3.9
cp -v .env.example .env

Modify the Cachet configuration file to look like the following

[user@myserver Cachet]# cat .env




Set apache user:group permissions for Cachet

chown -R apache:apache /var/www/Cachet/

Compose the site

cd /var/www/Cachet
composer install --no-dev -o

Generate an application key used for encryption

php artisan key:generate

Run the installer that seeds the database

php artisan app:install

Configuring Apache

Install and configure Apache service

yum install httpd
systemctl enable httpd
systemctl start httpd

Create a virtualhost configuration file called vhost.conf within the /etc/httpd/conf.d/ directory. It should look like the following:

[user@myserver Cachet]# cat /etc/httpd/conf.d/vhost.conf
<VirtualHost *:80>
    DocumentRoot "/var/www/Cachet/public"
    <Directory "/var/www/Cachet/public">
        Require all granted
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    <FilesMatch \.php$>
        SetHandler "proxy:fcgi://127.0.0.1:9000"
    </FilesMatch>
</VirtualHost>

Notice the <FilesMatch \.php$> directive. This was required in order to get PHP working within Apache; otherwise, you will get an error page when navigating to the Cachet dashboard page.

I also modified the /etc/httpd/conf/httpd.conf file and added index.php to the DirectoryIndex directive.

<IfModule dir_module>
    DirectoryIndex index.html index.php
</IfModule>

Test Cachet Status Page

Navigate to the following address in your browser:

You should be redirected to the /setup page


Other useful links


Install PHP and Apache on Red Hat


Posted in Cachet, Linux, PHP, RHEL7

Duo Authentication Proxy Configuration

The Duo Authentication Proxy is a quick and easy way for a business to start testing 2FA with certain important applications. A lot of software doesn’t have 2FA built in, but does offer some form of LDAP user/group authentication. Products such as HPE OneView, the HPE c7000/c3000 chassis, HPE SSMC, and Graylog offer only simple LDAP authentication. The Duo Authentication Proxy enables all of these applications to utilize 2FA, provided the application lets you increase its LDAP timeout to 60 seconds. Some applications allow you to modify their LDAP timeout value, but others do not. Graylog, for example, does allow you to modify the LDAP timeout value, though the new version works with Duo using the default configuration.

The Duo Authentication Proxy is, for the most part, pretty easy to set up. However, as soon as you start adding certificates, troubleshooting becomes a problem. The good news is that general troubleshooting is not that difficult if you enable debug logging within the authproxy.cfg file.

  1. Create a free DUO account by going to:
  2. Install the latest version of the authentication proxy by using the following guide:

Use Case Scenario: LDAPS Authentication Proxy

I currently have a load-balanced LDAPS service that points to each of my Active Directory domain controllers, and all of my LDAP applications point to this load-balanced service. I started by running the Duo auth proxy service without certificates to test the configuration; if certain applications worked with 2FA, certificates would then be added later. A sample configuration of my Duo LDAP auth proxy service is as follows:

The Duo Authentication Proxy configuration file is named authproxy.cfg and is located in the ‘conf’ subdirectory of the proxy installation.




Now you may notice a few things:

  1. I encrypted the service_account_password
  2. I have included my ssl_ca_certs_file for the ad_client section
  3. The ikey and skey values are generated when you add the Duo LDAP auth proxy application within the Duo admin interface.

In order to get the ssl_ca_certs_file working properly, I had to add the entire certificate chain to my .cer file. The chain I use contains two intermediate certificates and one root CA certificate. I simply copied the entire certificate chain and pasted it above my public certificate, in top-down order. Before figuring out this certificate chain problem, the Duo auth proxy service would fail to start.
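The concatenation itself is trivial; the part that matters is the order. Here is a sketch with placeholder file names (all hypothetical; the real files contain PEM-encoded certificates):

```shell
# Placeholder files standing in for real PEM certificates
printf 'INTERMEDIATE-1\n' > intermediate1.pem
printf 'INTERMEDIATE-2\n' > intermediate2.pem
printf 'ROOT-CA\n'        > root-ca.pem
printf 'PUBLIC-CERT\n'    > public.pem

# Chain pasted above the public certificate, in top-down order
cat intermediate1.pem intermediate2.pem root-ca.pem public.pem > ssl_ca_certs.cer

head -n1 ssl_ca_certs.cer
tail -n1 ssl_ca_certs.cer
```

If the proxy still refuses to start, the debug log usually points at which certificate in the bundle it choked on.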

Once I had tested a few applications with my new Duo auth proxy, I decided to go ahead and add my own certificate to the proxy and enable SSL connections. My configuration is below.

ldaps configuration




In order to get SSL working properly, I again had to add the entire certificate chain to my .cer file, this time pasted below my public certificate. It might have been possible to add these certificates to the default http_ca_certs_file instead. Before figuring out this certificate chain problem, the Duo auth proxy service would fail to start.

Good luck configuring your authentication proxy service, and I hope this helps anyone who might need help in the configuration.

Posted in Duo 2FA, ldaps

Office 365 / Exchange Hybrid Security Issue

I have recently discovered a major security issue related to deleted mailboxes in an Office 365 hybrid environment. Our former decommission process was to allow Office 365 to delete the mailboxes of “former” users no longer at the company. We simply moved the AD account to an OU that is not synced with Office 365 by the Azure AD Connect program, which marks the mailbox and account for deletion within Office 365.

In the past I noticed that the on-premise Exchange servers still showed “Office 365” as the Mailbox Type for these users.


I also have not configured Azure AD connect for any writeback functionality.

After some recent AD account compromises, I noticed that these accounts were somehow sending spam to outbound email addresses. How could this happen? The mailbox had been deleted by Office 365; there is no way you can send mail without an actual mailbox, right? Wrong! If you look at your on-premise receive connectors’ security settings, you will see the following: Permission Groups -> Exchange users


This means that the on-premise Exchange servers allow these accounts, which have been removed from Office 365, to connect and send email, even though they technically don’t have a mailbox on-premise or in Office 365. It is therefore imperative to disable the remote mailboxes on your on-premise Exchange servers. Fortunately, I keep these AD accounts in a particular OU within AD, so I could execute the following command to immediately disable all of these mailboxes:

Get-RemoteMailbox -ResultSize Unlimited -OnPremisesOrganizationalUnit "OU=someOU,OU=anotherOU,DC=contoso,DC=com" | Disable-RemoteMailbox

I hope this helps anyone before it becomes an issue within your organization. Cheers!

Posted in Exchange 2013, Office 365

New Active Directory Replication Status Tool

It’s that time of the year for another Exchange 2013 CU upgrade. Usually the first part of the upgrade process is to check whether you need to extend the AD schema. Before I extend the schema I always check AD replication using the command-line tools. This time, however, I found a really awesome GUI tool that simplifies this process.

Active Directory Replication Status Tool

Run the tool and refresh the replication status before and after each step in the process. The tool also has a really cool error guide that links directly to Microsoft support articles for replication issues. Enjoy!




Posted in Active Directory

RHEL7 extend an LVM-managed XFS File System

Most virtualization/Linux administrators have to expand certain virtual disks from time to time to increase free space. Here is my simple step-by-step procedure to expand these volumes. This process assumes that the volume you want to expand is managed by LVM and that the file system is XFS (the new default file system for RHEL 7).

#SSM currently does not support the +100%FREE option, so we use a combination of ssm and lvm commands. The System Storage Manager commands become available when you run the following command:

yum install system-storage-manager

Add new space to the virtual machine drive. If using VMware, open vCenter, locate the desired hard drive, and expand it to the size needed.

ssm list //list volumes, devices, pools…

partprobe //informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table

pvdisplay  //find the physical device that you want to extend

pvresize <Device Path>

pvdisplay  //verify the pv has been extended, in my case I had increased it by 1 TB of space

lvdisplay //find the logical volume (LV) path, e.g. /dev/rhel_data_pool/rhel_data_volume

lvextend -l +100%FREE <LV Path>


ssm list //verify ssm can now see volume size increase

#Now we extend the file system

xfs_info <LV Path>  //write down blocks
xfs_growfs <LV Path> // Grow XFS file system to the largest possible size
xfs_info <LV Path>  //verify block number increases

ssm list //verify FS size matches Volume size
df -h //verify FS size

Posted in Linux, RHEL7

Office 365 Mailbox Migration – RemoteRoutingAddress Issues

I recently started an Office 365 Exchange migration batch job with several thousand mailboxes. The migration of the mailboxes was working just fine, but we heard reports of bounced messages to a few migrated mailboxes:

Remote Server returned ‘554 5.4.6 Too many hops’

After troubleshooting this problem for a few hours, I determined that the cause was an incorrect RemoteRoutingAddress. Our Exchange 2013 Email Address Policy is pretty simple:

Email Address Policy

  • Primary:
  • Address 2:
Address 2 was added to our policy when the Office 365 Exchange Hybrid wizard was successfully run for the first time. My current problem is that several migrated mailboxes had the wrong RemoteRoutingAddress.
For some reason, after migrating a mailbox to Exchange Online, an incorrect RemoteRoutingAddress was set on the mailbox. Simply selecting the correct address in the drop-down list resolved the issue. Now I am curious: how many other mailboxes had the exact same problem, and why? I created the following PowerShell command to search all remote mailboxes for this same problem.

Get-RemoteMailbox -ResultSize Unlimited | Where-Object {$_.RemoteRoutingAddress -notlike "*"}

This command returns all remote mailboxes that have this problem. Now the question is: why is this happening? Since I don’t have this address in my email address policy, why does the mailbox migration add this SMTP address and set it as the RemoteRoutingAddress? At this point I am not sure, but I am looking into it. If anyone knows why this is happening, or has a more permanent solution, let us know. Thanks!

Posted in Exchange 2013, Office 365, Uncategorized