External monitor auto-configuration in GalliumOS (Ubuntu/Xfce)

03 Dec 2020

I bought a 4K monitor recently for a multitude of reasons: I was tired of doing graphic design on a 15” laptop screen, I was working from home more and more because of the pandemic, and because I just wanted a new toy to play with. My personal laptop is an Acer C910 running GalliumOS instead of ChromeOS. GalliumOS, built on Xubuntu for ChromeOS devices, has been great overall. Plugging in a new monitor has never been an issue, it is always immediately recognized, but I needed to perform a few manual steps to get the resolution and placement to work. It always wanted to default to 2560x1440 instead of 3840x2160, and I always needed to tell the display manager whether I wanted to extend or mirror my desktop.

Wanting this to be more of an automatic configuration when I plugged the monitor into the HDMI port, I turned to the internet and found a great post by Ben Fedidat. Ben’s post got me 90% of the way there, but I still had some minor changes I needed to add, which is why I’m making this post.

Here are the issues I encountered and needed to address:

  • Have the laptop recognize and configure the monitor every time I plugged it in.
  • Move all active windows from the external screen to the laptop screen when unplugging the external monitor (for proper hibernation when I eventually closed the lid).
  • When in clamshell mode with the external monitor unplugged, hibernate the laptop.

Ben’s post got the monitor recognized about 70% of the time when plugging it in. There were times when xrandr recognized the monitor but did not see its available resolutions; I believe this is because the monitor was in powersave mode and could not yet fully communicate with the kernel. There were more issues when I unplugged the monitor, mainly the laptop not going into hibernation or un-extending the display. Because this is a laptop running in clamshell mode, it would not go to sleep: even though I had unplugged my monitor, all the applications were still ‘active’ on the second screen along with the mouse cursor, so the laptop would not hibernate (and would shortly after kill the battery).

Things of note

  • Lines 10-15: I use xdotool to find all visible windows and move them to my laptop screen.
  • Line 17: I move my mouse cursor to the laptop screen.
  • Line 18: I sleep for 1 second to let everything finish. I’m not sure if this is necessary, but it seems to help.
  • Line 20: I let xrandr auto-configure the screens, which at this point should only be the laptop screen.
  • Lines 25-38: when the external monitor is connected I check xrandr to determine whether it correctly recognizes the monitor; if so, on lines 40-43 I configure both screens to the resolutions I want and position them accordingly.

Everything else pretty much matches Ben’s guide mentioned above.
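One piece worth restating from Ben’s guide is the trigger: a udev rule runs the script on DRM hotplug events, which is what makes the whole thing automatic. A rough sketch, assuming the script is saved somewhere like /opt/monitor-hotplug.sh (that path and the rule filename are my own placeholders):

```
# e.g. /etc/udev/rules.d/95-monitor-hotplug.rules (hypothetical filename)
ACTION=="change", SUBSYSTEM=="drm", RUN+="/opt/monitor-hotplug.sh"
```

udev runs the script in a bare environment with no X session, which is exactly why the script exports DISPLAY and XAUTHORITY itself.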

Script

date >> log.txt #log the date for debug purposes
sleep 1 #give the system time to setup the connection

# CHANGE THE 'HDMI-A-1' IN THE FOLLOWING LINE FOR YOUR SYSTEM
dmode="$(cat /sys/class/drm/card0-HDMI-A-1/status)"
export DISPLAY=:0
export XAUTHORITY=/home/MarkOffv/.Xauthority

if [ "${dmode}" = disconnected ]; then
    for window_id in $(xdotool search --onlyvisible ".*")
    do
        # THIS COMMAND IS MAPPED TO move-to-next-monitor.sh (https://github.com/jc00ke/move-to-next-monitor)
        # SHOULD MOVE ALL OPEN APPLICATIONS TO LAPTOP SCREEN
        xdotool key --window $window_id Shift+Alt+Right
    done
    # MOVE MOUSE CURSOR TO LAPTOP SCREEN
    xdotool mousemove --screen 0 10 10
    sleep 1

    /usr/bin/xrandr --auto
    echo "disconnected" >> log.txt
elif [ "${dmode}" = connected ]; then
    # LOOP IN CASE MONITOR IS COMING OUT OF POWERSAVE AND XRANDR DOESN'T HAVE WHAT IT NEEDS YET
    # GIVE THE LAPTOP A CHANCE TO SEE THE RESOLUTIONS OF THE CONNECTED MONITOR
    n=0
    while [ $n -le 5 ]
    do
        a=$(xrandr | grep -c "3840")
        if [ $a -gt 0 ]
        then
            break
        fi
        sleep .2
        # LOG NUMBER OF ATTEMPTS
        echo "Attempt ${n} resize failed" >> log.txt
        # INCREMENT $n
        n=$(( n+1 ))
    done

    if xrandr -q | grep -q "3840"; then # THE RESOLUTION SHOULD APPEAR
        # THIS IS THE XRANDR COMMAND, OUTPUT PIPED TO THE LOG
        /usr/bin/xrandr --output HDMI1 --mode 3840x2160 --pos 1920x0 --rotate normal --output VIRTUAL1 --off \
            --output eDP1 --mode 1920x1080 --pos 0x0 --rotate normal 2>&1 | tee -a log.txt
        echo "success" >> log.txt
    else
        xrandr >> log.txt
        echo "no resolution, bad setup" >> log.txt
    fi
else
    echo "unknown event" >> log.txt
fi

Issues I still have are as follows:

  • Sometimes minimized applications stay active on the external monitor screen and the laptop won’t hibernate.
  • Occasionally the mouse cursor doesn’t move and, again, the laptop won’t hibernate.
  • Terminal windows sometimes do not like to be moved with xdotool. I believe this is because the terminal takes the keystrokes as if they were being typed into it instead of being applied to the window itself.

This script gets me 95% of the way there. Is it perfect? No. Will other OSes handle this better? Yes. Did I like doing this? Absolutely. Sometimes Linux works 100% out of the box without need for intervention, but sometimes it makes you learn how it works and think a little. I like that.



Creating a dummy windows service in C

13 Jan 2015

I have a load-balanced web application that is hosted in a managed datacenter in another state. Because the data center is managed by a third party, I am not allowed direct access to the load balancer for SLA purposes, but I still need the ability to pull a server from the load balancer pool at will. As a work-around I want to create a Windows service that the load balancer can monitor: as long as the service is running, the server should remain in the pool; if the service is stopped, the load balancer should pull that server from the pool and route users to a server still considered operational. The trick is that the service essentially does nothing but run, and it is something I can manually start and stop at my own discretion.

Having never created a Windows service, I had to do quite a bit of research to determine the proper way to get this going. In the end I decided on C as the language for the service, using this article as my main source of information about accessing the Windows API. All this service does is start, then sleep in a loop, waking every 5 minutes, and that’s it: very simple, and it serves the purpose.

#include <windows.h>
#include <stdio.h>

#define LOGFILE "C:\\Temp\\sleepstatus.txt"

SERVICE_STATUS ServiceStatus; 
SERVICE_STATUS_HANDLE hStatus; 
 
void  ServiceMain(int argc, char** argv); 
void  ControlHandler(DWORD request); 
int InitService();

int WriteToLog(char* str)
{
	FILE* log;
	log = fopen(LOGFILE, "a+");
	if (log == NULL)
		return -1;
	fprintf(log, "%s\n", str);
	fclose(log);
	return 0;
}

int main() 
{ 
    SERVICE_TABLE_ENTRY ServiceTable[2];
    ServiceTable[0].lpServiceName = "SleepService";
    ServiceTable[0].lpServiceProc = (LPSERVICE_MAIN_FUNCTION)ServiceMain;

    ServiceTable[1].lpServiceName = NULL;
    ServiceTable[1].lpServiceProc = NULL;
    // Start the control dispatcher thread for our service
    StartServiceCtrlDispatcher(ServiceTable);  
    return 0;
}

void ServiceMain(int argc, char** argv) 
{ 
    int error; 
 
    ServiceStatus.dwServiceType        = SERVICE_WIN32; 
    ServiceStatus.dwCurrentState       = SERVICE_START_PENDING; 
    ServiceStatus.dwControlsAccepted   = SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_SHUTDOWN;
    ServiceStatus.dwWin32ExitCode      = 0; 
    ServiceStatus.dwServiceSpecificExitCode = 0; 
    ServiceStatus.dwCheckPoint         = 0; 
    ServiceStatus.dwWaitHint           = 0; 
 
    hStatus = RegisterServiceCtrlHandler(
		"SleepService", 
		(LPHANDLER_FUNCTION)ControlHandler); 
    if (hStatus == (SERVICE_STATUS_HANDLE)0) 
    { 
        // Registering Control Handler failed
        return; 
    }  
    // Initialize Service 
    error = InitService(); 
    if (error) 
    {
		// Initialization failed
        ServiceStatus.dwCurrentState       = SERVICE_STOPPED; 
        ServiceStatus.dwWin32ExitCode      = -1; 
        SetServiceStatus(hStatus, &ServiceStatus); 
        return; 
    } 
    // We report the running status to SCM. 
    ServiceStatus.dwCurrentState = SERVICE_RUNNING; 
    SetServiceStatus (hStatus, &ServiceStatus);
 
    // The worker loop of a service.
    // Do nothing but sleep; ControlHandler sets dwCurrentState to
    // SERVICE_STOPPED when a stop or shutdown request comes in.
    while (ServiceStatus.dwCurrentState == SERVICE_RUNNING)
    {
        // Sleep() takes milliseconds: 300000 ms = 5 minutes
        Sleep(300000);
    }
    return; 
}
 
// Service initialization
int InitService() 
{ 
    int result;
    result = WriteToLog("Monitoring started.");
    return(result); 
} 


// Control handler function
void ControlHandler(DWORD request) 
{ 
    switch(request) 
    { 
        case SERVICE_CONTROL_STOP: 
             WriteToLog("Monitoring stopped.");

            ServiceStatus.dwWin32ExitCode = 0; 
            ServiceStatus.dwCurrentState  = SERVICE_STOPPED; 
            SetServiceStatus (hStatus, &ServiceStatus);
            return; 
 
        case SERVICE_CONTROL_SHUTDOWN: 
            WriteToLog("Monitoring stopped.");

            ServiceStatus.dwWin32ExitCode = 0; 
            ServiceStatus.dwCurrentState  = SERVICE_STOPPED; 
            SetServiceStatus (hStatus, &ServiceStatus);
            return; 
        
        default:
            break;
    }  
    
    // Report current status
    SetServiceStatus (hStatus,  &ServiceStatus);
 
    return; 
} 	

Now that the C file has been created, I compile this into an EXE using MinGW:

C:\MinGW\bin\gcc sleepservice.c -o sleepservice.exe

At this point I create the C:\Temp\ folder on the server in question and copy the EXE to the newly created directory. To create the service on the server, open a command prompt and enter the following.

C:\>sc create SleepService binPath= "C:\Temp\sleepservice.exe" DisplayName= "SleepService" start= auto
[SC] CreateService SUCCESS

At this point you are done. You should be able to see your newly created Windows service and can start and stop it at will. When started, you should see a process called sleepservice.exe running in the Windows Task Manager.

New Windows Service



Grays Peak

25 Jul 2014

Summited Grays Peak again this year; it took me 3 hrs up and 1 hr 45 min back down.

  • Summit Elev.: 14,270 feet
  • Trailhead Elev.: 11,280 feet
  • Elevation Gain: 3,600 feet
  • Roundtrip Distance: 7.50 miles

Click the images below for the full size images with more detail.

About 1 mile in, the red arrow is where I’m going
At the start
Snow field on the way up
Snowfield
At the top; the arrow on the right is where I started, the left arrow is where I crested the ridge and followed it to the top.
View from the top



Utah is Beautiful

30 Jun 2014

My poker group decided to have a poker weekend in Vegas this summer, and instead of flying I chose to make it a roadtrip. I have visited Utah before but never via this route, and the scenery was amazing. After making this drive I’m going to need to plan a camping trip out that way in the near future. Simply stunning.

Amazing



Page Loading Performance Gains

01 May 2014

Turns out having around 100 <iframe>’s that pull data from YouTube or Soundcloud has an adverse effect on a page’s load time. As my music page on this website grew, I was starting to see load times in the 10+ second range, a bad usability experience every time I visited. Never having embedded this type of content before, it provided a good opportunity to find a way to speed things up.

The original design was just to plop the <iframe> into a <div> and hide it with jQuery until a user wanted to view that video. Even though the video was hidden, it was still loaded from its source when the page was rendered. This is a fine practice if you have one or two <iframe>’s embedded on a page, but it becomes problematic as that number grows, because the browser needs to go out and pull the web content from wherever each <iframe> is pointed.

After searching for a method to not load <div> content until it is toggled into view, I discovered it was possible to use a somewhat ‘made-up/undefined’ attribute called data-src. Using this attribute allows me to populate the entire page but not actually load the embedded content until the user selects/toggles a video to view. When that selection is made I use jQuery to toggle the <div> into view and change the data-src attribute to a valid src attribute, which loads the content.

This one change resulted in a 90% decrease in page load time. Check out the load time difference using Pingdom Tools’ website speed test.

Before and after page load times

In addition to changing how the <iframe>’s are loaded, I also changed the jQuery I was using to show and hide the <div>’s. John Snyder’s collapsible plug-in was great, but I wanted to get a better grasp of jQuery and rolled my own. It’s minor stuff, but it’s not dependent on any outside libraries now, which is nice. I also implemented some CSS sprites for the plus/minus images. The image is so small the performance gain is probably nil, but I wanted to learn how to do it.

All the changes have been merged into the [GitHub repository](https://github.com/ProfessionalAmateur/Reddit-Liked-List-Builder). Here is a summary of the new jQuery used on the music page.
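The heart of the change is the data-src swap described above. Here is a minimal sketch of the idea in plain JavaScript rather than jQuery (the activateFrame name is my own illustration, not code from the page itself):

```javascript
// Hypothetical illustration of the data-src technique: the iframe is written
// into the page with only a data-src attribute, so the browser fetches nothing
// up front. When the user toggles the video into view, copy data-src into src
// and the browser loads the embed on demand.
function activateFrame(frame) {
  // frame is the <iframe> DOM element (anything with get/setAttribute works)
  var pending = frame.getAttribute('data-src');
  if (pending && !frame.getAttribute('src')) {
    frame.setAttribute('src', pending); // this assignment triggers the load
  }
  return frame;
}
```

In the page itself this runs inside the same click handler that toggles the <div> into view.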




Are Splash Screens Back?

06 Jan 2014

I thought the internet as a whole gave up on these back in 2007: the magic landing page that tells your visitors they’ve arrived somewhere great but must make a decision to continue on with their adventure by clicking through. It didn’t take long for site owners and usability experts to realize people hated them [1] and that they often lost visitors because of them.

Even with the entire population of the internet in agreement, I find myself running into these ‘landing’ pages more often these days.

Call it what you want, it’s a splash page. Maybe I’ll be done with the scroll before the next harvest.

A massive full-screen image with a subject line or sales pitch that needs to be scrolled through to get to any interesting content. The only difference between this and 2007 is that a user does not need to click through to the content but instead has to scroll for what feels like a country mile to proceed. Does anyone find these types of sites appealing aside from the UX designer? Scrolling is already a somewhat broken feature in today’s tablet/smartphone ecosystem, and I think these layouts make the user experience worse. Even Google chose this type of format to market their Android 4.4 KitKat release.

I do love me some Android. This should be shown on a billboard in Times Square.

It should be stated I’m not picking on Evgeny Tchebotarev or his blog specifically; I have come across this web layout trend enough times over the past few months that I decided to write about it. Layouts like these are distracting, cumbersome and destroy the natural flow of reading a webpage. The technical aspects are extremely cool, but in terms of usability it makes my teeth itch. I’m a little surprised the internet decided to return to its jean jacket roots.



Rendering Difference Between Firefox and Chrome/Internet Explorer

05 Nov 2013

I’ve discovered that this blog does not display the same in Firefox as it does in Chrome and Internet Explorer. For some reason, in Chrome and IE the <div>’s on the music page stagger their layout. I’m primarily a Mozilla Firefox user and performed all my testing with that browser. I’m looking into how to fix this and will post an update shortly with my solution.

Here is how it looks in Firefox:

Perfectly Normal Firefox Rendering

And the not-so-good-looking Chrome:

Blargh

Staggering, I know…

EDIT

So the issue appears to be a padding value within the CSS for the <span>’s that contain the plus and minus images. Taking it from a value of padding:10px to padding:9px 10px 10px 10px; resolved the issue. I’m not sure if my line height is causing the issue or if the images are too big for a span with that much padding… Very odd.
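As a sketch, the change looks like this in CSS (the span.toggle selector is a placeholder of mine; the music page’s actual class name isn’t shown here):

```css
/* placeholder selector for the <span>'s holding the plus/minus images */
span.toggle {
  /* was: padding: 10px; */
  padding: 9px 10px 10px 10px; /* 1px less on top stops the stagger in Chrome/IE */
}
```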



Embedding Youtube and Soundcloud Content Nicely with jQuery

09 Oct 2013

The first incarnation of my music page contained a simple list of plain <a href> links to the sites hosting the content, usually Youtube, Soundcloud or Grooveshark. While this did meet my most basic requirement of maintaining a list of songs I like, having to leave the page to view the content, only to hit the back button to return and select the next song, was clunky. To solve this I wanted to embed the video content right on the page while still keeping the ‘list-like’ format for conciseness.

Creating the embeddable <iframe> for Youtube was fairly straightforward. Grab the URL, parse out the ‘v’ parameter, then copy-paste Google’s iframe syntax populated with the parameter. Boom, glorious embeddable Youtube content.
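The Youtube step can be sketched in a few lines of Python (the youtube_embed helper and the width/height values are my own illustration, not code from the original script):

```python
# Hypothetical sketch: parse the 'v' parameter out of a watch URL and
# drop it into Google's iframe embed syntax.
from urllib.parse import urlparse, parse_qs

def youtube_embed(watch_url):
    # pull the 'v' query parameter (the video id) out of the URL
    video_id = parse_qs(urlparse(watch_url).query)['v'][0]
    # populate Google's iframe syntax with the id
    return ('<iframe width="560" height="315" '
            'src="https://www.youtube.com/embed/{0}" '
            'frameborder="0" allowfullscreen></iframe>'.format(video_id))
```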

Soundcloud’s embeddable content was a bit more difficult because the URL they use in the <iframe> is not a straight derivative of the URL a person would use to view content on their website. Thankfully Soundcloud has a great API, and with a little python magic I could fetch the whole <iframe> source straight from a web service call.

Now in possession of embeddable content from my favorite providers, I needed a way to display it neatly. I decided to search for a way to create a collapsible <div>: something that would let me still see the artist and song in list form but also view the content without leaving the page. John Snyder created an awesome jQuery plugin that does exactly that.

This was my first foray with jQuery, so I’m not entirely sure how this magic works. Perusing the source I could vaguely follow what was happening, but I do not yet have enough knowledge of javascript or the DOM to fully understand the plugin. Regardless, with a little effort I managed to get the desired results as demonstrated below (click the plus sign, it’s awesome):

This is a rough example of how I retrieve the Soundcloud <iframe> HTML. You can see that the URL used to view content on the Soundcloud webpage is the ‘url’ parameter in the API call. They offer several output formats; I chose JSON since I was already using it to parse the Reddit data coming back from their API.

import json
import logging
import urllib.request, urllib.error, urllib.parse

def fetch_soundcloud_embed_html():
    url = 'http://soundcloud.com/oembed?format=json&url=https://soundcloud.com/mat_zo/the-mat-zo-mix-004-21-09-13&iframe=true'
    opener = urllib.request.build_opener()
    try:
        req = urllib.request.Request(url)
        r = opener.open(req).read()
        output = r.decode('utf-8')
        return json.loads(output)["html"]
    except Exception as e:
        logging.error(e)



Github, Python, Reddit and more

23 Sep 2013

Ever find yourself wondering what the name of that song was you listened to a week ago on YouTube? This happens to me all the time: I browse the /r/electronicmusic subreddit almost every day to find new music to listen to. I would upvote songs and then, a week later, find myself digging through my history for songs I liked, but it was always a hassle to weed through all my upvoted content just to find songs to listen to.

Knowing Reddit offered an API, I thought there was probably a way I could automate reading through my history and listing all the songs I liked, saving myself hours of manual labor (I am very lazy… err, I mean efficient).

I wanted to accomplish three things with this mini project:

  • Write a python script
  • Get myself on GitHub
  • Build a consistently updated list of good music to listen to

Overview

This python script will call Reddit’s API, parse a user’s ‘liked’ history for a given subreddit and output a list of HTML links to a file.

Usage

  1. Download the redditlikedlist.py and rll.cfg to a directory of your choice.
  2. Populate the fields in the rll.cfg file (see the Configuration File section below).
  3. Execute the python script

    $> python redditlikedlist.py rll.cfg

The first time the script is run it will start with your most recent liked links and work backwards into the past. Reddit will allow you to parse your most recent 1000 links. After the first run the script stores the most recent link in the ‘beforelinkname’ field of the configuration file. On subsequent executions the script will start parsing from this link and work forward to the present.

Below I will post the most current version of this script; however, please see the [GitHub repository](https://github.com/ProfessionalAmateur/Reddit-Liked-List-Builder) for the most up-to-date version.

Configuration File

[PATHS]  
outputfile = (The destination of the output file. Example: /home/John/musiclist.html) 
subreddit = (The subreddit you want to parse from. Example: electronicmusic) 

[WAYPOINT] 
beforelinkname = (leave blank, the script will populate) 

[CREDENTIALS] 
username = (Your Reddit username, must have a valid Reddit account to use this script.) 
password = (Your Reddit password, must have a valid Reddit account to use this script.) 
useragent = (Reddit requires a unique user agent for all calls to its API, 
             it is recommended you incorporate your username in the agent. Example: BobaFett37's Liked List Parse)

Python Script

Python 3+ is required to run this script. I have another mostly working version on GitHub for Python 2.6.6.

# This script will login to Reddit, return all liked stories for a given user
# parse all the subreddit likes and build and output for 
# a website listing.
#
import time
import datetime
import urllib.request, urllib.parse, urllib.error
import urllib.request, urllib.error, urllib.parse
import http.cookiejar
import json
import configparser
import logging
import tempfile
import os
import argparse

# Variables
hdr = {}
before_flag = False
link_value = '' 
liked_url =  'https://ssl.reddit.com/user/<username>/liked.json?limit=100&<direction>=<link_name>'
cj = http.cookiejar.CookieJar()
cfg_file = ''
final_file_location = ''
username = ''
password = ''
subreddit = ''
iCounter = 0
tmpfile = tempfile.TemporaryFile()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

# Parse input
def parse_input_params():
    global cfg_file
    parser = argparse.ArgumentParser(description='Music list built from liked stories within a subreddit')
    parser.add_argument("config_file", help="Configuration file location")
    args = parser.parse_args()
    cfg_file = args.config_file

# Load config file
def get_config():
    config = configparser.ConfigParser()
    try:
        config.read(cfg_file)
        return config
    except Exception as e:
        logging.error(e)

# Retrieve values from config file
def load_config_values(config):
    try:
        global final_file_location
        global username
        global password
        global subreddit
        global hdr
        global before_flag
        global link_value
        final_file_location = config.get('PATHS','outputFile')
        subreddit = config.get('PATHS','subreddit')
        link_value = config.get('WAYPOINT','beforeLinkName')
        if link_value:
            before_flag = True
        username = config.get('CREDENTIALS','username')
        password = config.get('CREDENTIALS','password')
        hdr['User-Agent'] = config.get('CREDENTIALS','useragent')

    except Exception as e:
        logging.error(e)

# Reddit Login Function
def login(username, passwd):
    values = {'user': username,
              'api_type': 'json',
              'passwd': passwd}

    login_url = urllib.request.Request('https://ssl.reddit.com/api/login/', headers=hdr)
    data = urllib.parse.urlencode(values)
    data_bytes = data.encode('utf-8')
    try:
        response = opener.open(login_url, data_bytes).read()
    except Exception as e:
        logging.error(e)        
        
def process_reddit_data():
    global link_value
    global tmpfile
    global iCounter
    try:
        while (link_value is not None):
            time.sleep(3)
            liked_json = retrieve_liked(username)

            if (before_flag == False):
                link_value = json.loads(liked_json)["data"]["after"]
            else:
                link_value = json.loads(liked_json)["data"]["before"]

            liked_json = json.loads(liked_json)["data"]["children"]
            for titles in liked_json:
                iCounter += 1
                if (iCounter == 1):
                    write_config_values(titles["data"]["name"])

                if(titles["data"]["subreddit"]==subreddit and titles["data"]["media"] is not None):
                    tmpfile.write(bytes('<a href=\''+ titles["data"]["url"] + '\'>' + titles["data"]["title"] + '</a><br/>\n', 'utf-8' ))

    except Exception as e:
        logging.error(e)
        
# Fetch liked content for a user
def retrieve_liked(username):
    try:
        if(before_flag == True):
            direction = 'before'
        else:
            direction = 'after'
        repl = {'<username>':username, '<link_name>':link_value, '<direction>':direction}
        url = replace_all(liked_url, repl)
        url =  urllib.request.Request(url, headers=hdr)
        r = opener.open(url).read()
        response = r.decode('utf-8')
        return response
    except Exception as e:
        logging.error(e)

# Write/Update config file
def write_config_values(before_link):
    try:
        configVal.set('WAYPOINT', 'beforeLinkName', before_link)
        f = open(cfg_file, 'w')
        configVal.write(f)
        f.close()
    except Exception as e:
        logging.error(e)

def updated_timestamp():
    ts = time.time()
    st = datetime.datetime.fromtimestamp(ts).strftime('%m-%d-%Y %H:%M:%S')
    return("Last updated at: " + st + "<br/><br/>\n")
        
def write_output():
    global tmpfile
    try:
        if os.path.exists(final_file_location):
            #final output file already exists, we need to append new data.
            f2 = open(final_file_location, 'r')
            for i in range(1):
                next(f2)
            for line in f2:
                tmpfile.write(bytes(line, 'utf-8'))
            f2.close()

        tmpfile.seek(0)
        f = open(final_file_location, 'wb')
        f.write(bytes(updated_timestamp(), 'utf-8'))
        for line in tmpfile:
            f.write(bytes(line))
        f.close()

        tmpfile.close()
    except Exception as e:
        logging.error(e)

# generic replace text using dict function
def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text


###########################################################
# Main Processing
###########################################################
parse_input_params()
configVal = get_config()
load_config_values(configVal)

# Call login and retrieve liked content.  Each call must be separated by at least 2 seconds.
login(username, password)
process_reddit_data()
write_output()



Parse Excel Documents Quickly with Powershell

06 Sep 2013

Using Powershell to manipulate Microsoft Excel documents can be an extremely handy and helpful tool. Setting out to interface an Excel document into EnterpriseOne, I found this MSDN blog post by Sriram Reddy to help with my initial PS script. Sriram’s method is to use a COM interface into Excel to gain access to its content. This method essentially starts up Excel, in the background or foreground, and gives you access almost as if you were using the application directly. The problem I soon discovered was that this method is excessively slow: I was seeing performance in the area of .6 seconds per row read, and I was only reading around 1000 rows of data. In search of better performance I found a better way to access Excel via Powershell using OLEDB that is extremely efficient and easy to use.

Here is a snippet of the sample data I used for this test.
Sample Data

My first script shown below ran very slow.

########################################################################################################
# This is a common function I am using which will release excel objects
########################################################################################################
function Release-Ref ($ref) {
    ([System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$ref) -gt 0)
    [System.GC]::Collect()
    [System.GC]::WaitForPendingFinalizers()
}

########################################################################################################
# Creating excel object
########################################################################################################
$objExcel = new-object -comobject excel.application 

# Set to false to not open the app on screen.
$objExcel.Visible = $False

########################################################################################################
# Directory location where we have our excel files
########################################################################################################
$ExcelFilesLocation = "C:/TestExcelData/" + $args[0]

########################################################################################################
# Open our excel file
########################################################################################################
$UserWorkBook = $objExcel.Workbooks.Open($ExcelFilesLocation) 

########################################################################################################
# Here Item(1) refers to sheet 1 of the workbook. If we want to access sheet 10, 
# we have to modify the code to Item(10)
########################################################################################################
$UserWorksheet = $UserWorkBook.Worksheets.Item(2)

########################################################################################################
# This is a counter which will help iterate through the loop. This is simply a row counter.
# I am starting the row count at 2, because the first row in my case is the header,
# so we don't need to read the header data
########################################################################################################
$intRow = 2

$a = Get-Date
write-host $a
write-host "---------------------------"

Do {

    # Reading the first column of the current row
    $TicketNumber = $UserWorksheet.Cells.Item($intRow, 1).Value()
    write-host $intRow " " $TicketNumber    
    $intRow++

} While ($UserWorksheet.Cells.Item($intRow,1).Value() -ne $null)

$a = Get-Date
write-host $a
write-host "---------------------------"

########################################################################################################
# Exiting the excel object
########################################################################################################
$objExcel.Quit()

########################################################################################################
#Release all the objects used above
########################################################################################################
$a = Release-Ref($UserWorksheet)
$a = Release-Ref($UserWorkBook) 
$a = Release-Ref($objExcel)

Adding the timestamps showed just how slow this method was at accessing Excel data, running right around .6 seconds per row. That translates to my users staring patiently at their screen for a minute for every 100 rows of Excel data they are trying to process. I don’t know about your users, but mine will definitely not be kind to me with that kind of performance, nor should they be.


PS C:\Powershell\ExcelInterfaceTest> .\ExcelCOMInterface.ps1 test.xlsx
9/6/2013 4:46:01 PM
---------------------------
2   078110
3   078108
4   078107
<SNIP>
244   078347
245   078350
9/6/2013 4:48:33 PM
---------------------------
PS C:\Powershell\ExcelInterfaceTest>

End users were going to be the final destination for this interface; performance hovering in the area of several minutes to process the Excel data was not going to pass muster. After asking on StackOverflow, I was directed to use OLEDB to access Excel data from Powershell, and it even lets you use SQL syntax to do so!

Here is the much faster new script using OLEDB to accomplish the same task as above. It should be noted that with this method the first row of your data must contain your column header information (the columns you ‘SELECT’ from, in SQL terms).

########################################################################################################
# Excel file to read (built from the first script argument) and the worksheet name
########################################################################################################
$ExcelFile = "C:/TestExcelData/" + $args[0]
$Sheetname = "Sheet2$"

########################################################################################################
$OleDbConn = New-Object System.Data.OleDb.OleDbConnection
$OleDbCmd = New-Object System.Data.OleDb.OleDbCommand
$OleDbAdapter = New-Object System.Data.OleDb.OleDbDataAdapter
$DataTable = New-Object System.Data.DataTable
$OleDbConn.ConnectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=`"$ExcelFile`";Extended Properties=`"Excel 12.0 Xml;HDR=YES`";"
$OleDbConn.Open()
$OleDbCmd.Connection = $OleDbConn
$OleDbCmd.CommandText = "Select * from [$Sheetname]"
$OleDbAdapter.SelectCommand = $OleDbCmd
########################################################################################################

$RowsReturned = $OleDbAdapter.Fill($DataTable)
$intRow = 1
ForEach ($DataRec in $DataTable) {

    # Reading the first column of the current row
	$TicketNumber = $DataRec."ticket number"
    write-host $intRow " " $TicketNumber  
    $intRow++
}

While OLEDB is still a COM object, it is much faster than the Excel.Application COM interface; in addition you can access your data using SQL syntax, which increases readability and makes the script easier to maintain. I didn’t add timings to judge performance with this script because it was a night-and-day difference, completing literally within 1-2 seconds; there was no doubt that this was the solution to use.
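The SQL angle deserves a quick illustration. Because the worksheet behaves like a table, the query can filter and sort at read time instead of looping over every row. A sketch, with assumptions: the column name ‘ticket number’ comes from the script above, but the WHERE value here is made up for illustration.

```powershell
# Worksheet name as used in the script above
$Sheetname = "Sheet2$"

# Hypothetical filtered query against the sheet; in the script above this
# string would be assigned to $OleDbCmd.CommandText in place of the SELECT *
$query = "SELECT [ticket number] FROM [$Sheetname] WHERE [ticket number] > '078300' ORDER BY [ticket number]"
write-host $query
```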



Dynamically switch Powershell to run in 64-bit mode

03 Sep 2013

Powershell is an amazing tool in the MS Windows world of computing; I explain to the unfamiliar that it’s like BASH for Windows. You can accomplish pretty much anything in *nix using BASH scripting, and the same is becoming true using Powershell for Windows. Want to query Active Directory for all startup scripts assigned to a user in an OU? Powershell can do it. Want to deploy an application to every PC in your remote office in Tunguska, Siberia? Powershell can do it. Want to verify a service is running on your helpdesk server and restart it if not? Powershell can do it.

As with any new tool, Powershell does have its own learning curve and quirks; getting a handle on the syntax and process behavior takes some time, as you’ll see with this example.

I stumbled upon this Powershell oddity while trying to parse a Microsoft Excel document and import it into a JD Edwards EnterpriseOne table. To do this I was calling Powershell from a C program using the jdeSystem command (the JDE version of a C library ‘system’ call). I developed the application, the business function and the Powershell script on my development client, tested it, and everything went swimmingly (of course all this on the first try too ;). Next came a build and deploy for a unit test on our development environment’s JAS and application servers. This time when I ran the test I got…nothing. No errors, no warnings and no expected data. I was baffled; it still ran perfectly on my dev client but not on the server.

The culprit: a combination of our 64-bit Microsoft Server 2008 R2 Enterprise Server, the 64-bit MS Excel ODBC driver, and our 32-bit compiled JDE software. All JD Edwards C functions are compiled as 32-bit programs, so when you make a ‘system’ call to launch Powershell it will by default launch the 32-bit Powershell environment regardless of the 64-bit OS… and 32-bit Powershell just won’t play nice with a 64-bit ODBC driver.

After a lot of Google-fu I found a blog post describing a similar situation and derived my fix: have Powershell verify its architecture and relaunch itself recursively using, in my case, the 64-bit environment.

This is another good explanation of when $env:PROCESSOR_ARCHITEW6432 is set to either x86 or AMD64.

Here is a brief explanation of the script below:

  1. Script launches
  2. Checks the environment variable PROCESSOR_ARCHITEW6432 to determine if we are in 64 or 32 bit land.
  3. If in 32-bit land, relaunch script using 64-bit Powershell and exit. Otherwise complete the script normally.
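The check in step 2 can be seen in isolation like so (a sketch of the test the script performs; the messages are mine, not part of the script):

```powershell
# PROCESSOR_ARCHITEW6432 is only set inside a 32-bit (WOW64) process on a
# 64-bit Windows OS; a native 64-bit process sees PROCESSOR_ARCHITECTURE = AMD64
if ($env:PROCESSOR_ARCHITEW6432 -eq "AMD64") {
    $mode = "32-bit Powershell on a 64-bit OS - relaunch in 64-bit"
} elseif ($env:PROCESSOR_ARCHITECTURE -eq "AMD64") {
    $mode = "native 64-bit Powershell - nothing to do"
} else {
    $mode = "32-bit OS (or non-x64 architecture) - nothing to do"
}
write-host $mode
```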

This really saved me a lot of headache and I hope it helps someone else.

####################################################
# 32-bit-Test.ps1
# Created: 03/20/2013
# Author: me
# Summary: This script will test if Powershell is 
#          running in 32-bit mode and if true
#          launch itself in 64-bit mode for completion
####################################################

########################################################################################################
#If Powershell is running the 32-bit version on a 64-bit machine, we need to force powershell to run in
#64-bit mode to allow the OleDb access to function properly.
########################################################################################################
if ($env:PROCESSOR_ARCHITEW6432 -eq "AMD64") {
	write-warning "Y'arg Matey, we're off to 64-bit land....."
	if ($myInvocation.Line) {
		&"$env:WINDIR\sysnative\windowspowershell\v1.0\powershell.exe" -NonInteractive -NoProfile $myInvocation.Line
	}else{
		&"$env:WINDIR\sysnative\windowspowershell\v1.0\powershell.exe" -NonInteractive -NoProfile -file "$($myInvocation.InvocationName)" $args
	}
	exit $lastexitcode
}


write-host "Main script body"

################
# END
################



First real blog update since migration

02 Sep 2013

So here we are, the first real post since the migration to Jekyll as a parsing engine. This makes updating and maintaining this blog much easier for me compared to rolling my own PHP and MySQL database. Creating a blog front and back-end on my own is fun for about 3 weeks and then I tend to lose interest; I’ve done the same thing probably three times now with the same results. It was also sort of a hassle to insert code snippets into the database using PHP while sanitizing my input. Using Jekyll everything is static and there is no back and forth to the server anymore.

I wanted to attempt this blog for a few reasons: I program for a career, so I feel it’s helpful and important to stay current on technologies; being on the internet is fun; and blogging will allow/force me to dig a little deeper into the things I post (I hope).

So that’s it. I have a few Powershell creations I’d like to post up, and then I’ll try to find something to venture into. I’d like to get into mobile and work with Android some; that’s probably where I’ll go from here.

Oh, I’ll probably grab a real domain name first…(heh) Still thinking of a good name I’d like. Coding Soupcan is nice, but reminds me a little too much of Jeff Atwood’s vastly successful blog Coding Horror… I’ll think of something I find quippy.



Moving to Jekyll

27 Aug 2013

In the interest of making life easy for me, I will be moving this site from a PHP/MySQL architecture to one built using Jekyll. Most of my content will be completely static and Jekyll seems like a good tool to build new posts quickly. Stay tuned while I migrate the old content over.



Hello

26 Aug 2013

Hello World!



Verify a service is running with Powershell

24 Aug 2012

We use Spiceworks to manage our helpdesk and IT infrastructure. Frequently we would find that the ‘spiceworks’ service would be stopped for one reason or another on the server, which prevented our users from entering helpdesk tickets (bad!).

At this point in time we did not want to implement something like Nagios to ensure everything is up and running, so I wrote a small Powershell script that checks the status of a service and, if it’s not running, sends an email and attempts to restart the service. This gives IT a heads up that there may be an issue with the server, and hopefully proactively gets things running again without any intervention on our part.

We have this called via the Windows Task Scheduler every 15 minutes. For execution, run the script and pass the service name like so: verify_service_status.ps1 [Service]

###################################################################
# verify_service_status.ps1
# Created: 09/14/2012
# Author: Me
# Summary: Verify a service is running, send email if it's down
###################################################################
#####################################
# Send Email Function
#####################################
 function sendMail($service, $serverNM){
 
     #SMTP server name
     $smtpServer = "XXX.XXX.XXX.XXX"
	 
     #Creating a Mail object
     $msg = new-object Net.Mail.MailMessage

     #Creating SMTP server object
     $smtp = new-object Net.Mail.SmtpClient($smtpServer)

     #Email structure
     $msg.From = "Server@example.com"
     $msg.ReplyTo = "ImportantPeople@example.com"
     $msg.To.Add("ImportantPeople@example.com")
     $msg.To.Add("ImportantPeople@example.com")
     $msg.subject = 'Alert ' + $Service + ' is not running on ' + $serverNM
     $msg.body = "The following service is not running on " + $serverNM + " and has attempted to be restarted: " + $Service

     #Sending email
     $smtp.Send($msg)
 
}

##############################
# Get service status
##############################
if($args[0] -ne $NULL){

	$serviceName = $args[0]
	$serverName = hostname

	$status = (Get-Service $serviceName).Status

	if ($status -ne "Running"){
		sendMail "$serviceName" "$serverName"
		Restart-Service $ServiceName
	}else{
			# Service is running, do nothing;
	}
}



Validate Media Object files physically exist in JD Edwards EnterpriseOne

24 Aug 2012

Media Objects are a method of storing a file attachment against a record within the EnterpriseOne ERP system. The gist of the process: a table (F00165) stores a pointer to a physical file location on a server. Occasionally these physical files get deleted, either through bugs within EnterpriseOne (ex: OLEs being linked to more than 1 transaction, etc…) or through user error where the file is physically deleted from the media object server.

I created this Powershell script to go through the F00165 media object table and verify that each file still physically exists. If it does not, the script writes a record to a CSV file to be reviewed later. Please note this is performed against an Oracle database, but it can be applied to any database with minor modifications.



##################################################################
# Validate_Media_Object_Files.ps1
# Created: 12/20/2010
# Author: Me
# Summary: Fetch all file and OLE media object records from the F00165 table.
# Then verify the file exists in the location specified (should always be the media object server).
##################################################################
#------------------------
# Inits
#------------------------
[Reflection.Assembly]::LoadFile("C:\ora10g\odp.net\bin\2.x\Oracle.DataAccess.dll")
$connectionString = "Data Source=XXXXX;User Id=XXXXXXXX;Password=tXXXXXXX;"
$connection = New-Object Oracle.DataAccess.Client.OracleConnection($connectionString)
$connection.Open()
$queryString = "SELECT GDOBNM, GDTXKY, GDMOSEQN, GDGTMOTYPE, GDGTFILENM, GDUSER, GDUPMJ, GDTDAY FROM F00165 WHERE GDGTMOTYPE IN (2, 5)"
$command = new-Object Oracle.DataAccess.Client.OracleCommand($queryString, $connection)
$mediaObjRS = $command.ExecuteReader()
$files = @() # Initialize array outside loop ...
$counter = 0
$filedate = get-date -format "M-d-yyyy"
$outputFilename = "MissingMediaAttach" + $filedate + "_PROD.csv"

#---------------------------------------
# Loop through recordset
#---------------------------------------
while ($mediaObjRS.read()) {

   # Assign variables from recordset.
   $object = $mediaObjRS.GetString(0)
   $objectKey = $mediaObjRS.GetString(1)
   $objectSeq = $mediaObjRS.GetDecimal(2)
   $objectType=$mediaObjRS.GetDecimal(3)
   $objectFileName = $mediaObjRS.GetString(4)
   $objectUser = $mediaObjRS.GetString(5)
   $objectDate=$mediaObjRS.GetDecimal(6)
   $objectTime=$mediaObjRS.GetDecimal(7)

        #-------------------------------------------------------
        # Check if file exists based on filetype.
        #-------------------------------------------------------
        if($objectType -eq 2){
                # Type 2 = OLE
                $fileLocation = "\\jdemed02\D$\E812\MEDIAOBJ\Oleque\" + $objectFileName
        }elseif($objectType -eq 5){
                # Type 5 = FILE
                $fileLocation = $objectFileName
        }

        # Skip test if media object points to a local drive letter (a file on a user's machine)
        if($fileLocation.substring(1,2) -ne ":\"){
                # Skip test if URL
                if(!$fileLocation.tolower().StartsWith("http")){
                        $fileExists = Test-Path $fileLocation

                        if(!$fileExists){
                                #Write to output file
                                $obj = new-object psobject -prop @{
                                           Object = $object
                                           FileName = $objectFileName
                                           Type = $objectType
                                           Seq = $objectSeq
                                           Key = $objectKey
                                           User = $objectUser
                                           Date = $objectDate
                                           Time = $objectTime
                                   }
                                   $files += $obj
                                   $counter = $counter + 1
                                   Write-Host "$objectKey $objectSeq $objectType $objectFileName"
                        }
                }
        }
}

$files | Export-Csv $outputFilename

$connection.Close()

Write-Host "Record Count: " $counter