EMR and Device Integration

June 23, 2007

Biomedical/Bedside/ICU Device Integration

In the words of Tim Gee of Medical Connectivity Consulting: “Medical device integration is a critical (and an often overlooked) part of EMR planning. To be successful, any plan must take into account many more considerations beyond getting an HL7 feed into the EMR. Multiple stakeholders including nursing and clinical/biomedical engineering must be engaged. Putting together a successful long term plan requires negotiations across traditional hospital silos, and an in depth understanding of point-of-care workflows, medical device connectivity and device vendor offerings and product strategies”.

The benefits of automatic data collection (heart rate, invasive/non-invasive blood pressure, respiration rate, oxygen saturation, blood glucose, etc.) from acute care monitoring devices have become so obvious that hospitals now routinely require that their clinical information system (CIS), anesthesia information management system (AIMS), electronic medical record (EMR), electronic patient record (EPR) or other hospital/healthcare information system (HIS) provide interfacing capabilities to biomedical devices, so that key vital signs are stored in the Centralized Data Repository (CDR) to track patient progress over time.

Patient monitoring systems are among the first to be integrated, because every HIS requires at least basic patient vital sign collection. Integration with anesthesia devices is a must for any AIMS. Data collection from ventilation systems is required for most ICU systems. Infusion device data integration is increasingly requested where CPOE systems are implemented.

But connecting to bedside medical devices and collecting data in your CIS or EPR is not as simple as it may seem. Device interface development is a specialized task that consumes resources and diverts attention away from core competencies. Competitive issues make obtaining device protocols difficult and sometimes impossible. Incomplete connectivity results in frustration and decreased efficiency of the hospital.
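
To make the “HL7 feed” mentioned above concrete, here is a minimal C# sketch of how a device gateway might assemble a vital-signs observation as an HL7 v2 ORU^R01 message and frame it with MLLP for delivery over TCP/IP. The segment values, LOINC codes, host name and port are illustrative assumptions, not any vendor’s actual interface.

    // Minimal sketch: build an HL7 v2.x ORU^R01 vital-signs message and frame it with
    // MLLP (0x0B ... 0x1C 0x0D) for delivery over TCP/IP. All identifiers are illustrative.
    using System;
    using System.Net.Sockets;
    using System.Text;

    class Hl7VitalsFeedSketch
    {
        static void Main()
        {
            string ts = DateTime.Now.ToString("yyyyMMddHHmmss");
            string msg = string.Join("\r", new[]
            {
                "MSH|^~\\&|BEDSIDE_MON|ICU1|EMR|HOSP|" + ts + "||ORU^R01|MSG0001|P|2.3",
                "PID|1||123456^^^HOSP^MR||DOE^JOHN",
                "OBR|1|||VITALS^Vital Signs|||" + ts,
                "OBX|1|NM|8867-4^Heart Rate^LN||72|/min|||||F",   // LOINC code assumed
                "OBX|2|NM|2708-6^SpO2^LN||98|%|||||F"
            });

            // MLLP framing: <VT> message <FS><CR>
            byte[] frame = Encoding.ASCII.GetBytes("\x0B" + msg + "\x1C\r");

            using (TcpClient client = new TcpClient("emr-interface-host", 6661))   // host/port assumed
            {
                client.GetStream().Write(frame, 0, frame.Length);
            }
        }
    }

In practice an interface engine or a connectivity suite handles this framing plus acknowledgements and retries; the sketch is only meant to show what a single observation feed looks like on the wire.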

The main questions to consider when integrating devices with a HIS are listed below:

Categories of Medical Devices for Integration:

Vital Signs or Diagnostics devices
Infusion Pumps
Dialysis devices
Anesthesia machines
EKG and EEG devices
Endoscopy devices
Glucometers
Urimeters
Bedside devices
Oximeters with Patient Monitoring and Alarm Systems
Ventilators
Ultrasound devices
Stress testing devices

Type of Device Connectivity to the HIS

Wireless/Mobile
Fixed

Format of Message feed from Device(s) to the HIS

HL7-format result messages, possibly with images, etc., across TCP/IP
Proprietary format messages across TCP/IP
Binary format data across USB or others

Format of Message feed to Device(s) from the HIS

HL7 format ADT messages across TCP/IP
Proprietary format messages across TCP/IP
Binary format data across USB or others

Frequency and Location of Device Data Feed to the HIS

Continuous (Periodic) Real-time – 1 message per minute or less
Manual (Aperiodic) or on-demand
Server-based – with storage for real-time data and polling-frequency options (see the sketch after this list)
Location: ICU or PACU
Time synchronization among all the connected systems is important
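
To illustrate the periodic (polled) option in code, here is a minimal C# sketch of a server-side collector that wakes up once a minute, reads the latest value and forwards it to the HIS. The class names, the stubbed reading and the one-minute interval are assumptions for illustration only.

    // Sketch: poll a bedside data source every 60 seconds and forward the reading.
    // A continuous feed would instead push on every new observation.
    using System;
    using System.Threading;

    class PeriodicVitalsCollector
    {
        private Timer timer;

        public void Start()
        {
            timer = new Timer(Poll, null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
        }

        private void Poll(object state)
        {
            int heartRate = ReadHeartRateFromDevice();        // placeholder for the device driver call
            Console.WriteLine("Forwarding HR=" + heartRate);  // placeholder for the HL7 send to the HIS
        }

        private int ReadHeartRateFromDevice()
        {
            return 72;   // stubbed value; a real collector would query the monitor protocol
        }

        static void Main()
        {
            new PeriodicVitalsCollector().Start();
            Thread.Sleep(Timeout.Infinite);   // keep the demo process alive
        }
    }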

Grouping of Device Data in the HIS is based on:

Patient Chart sections
Department Needs and Security Roles
Common Device Parameters
Dependent Device Parameters
Device Monitoring and Asset Tracking
Display and Storage of the data – claims, clinical encounters, drug/pharmacy, lab, images – captured and mapped to a common format, possibly ASTM’s Continuity of Care Record (CCR).

Security Issues:

Caregivers need access to validate device data before it is written to the patient chart
Audit trail and enterprise timestamps on device data
High speed secure network with firewalls to protect ePHI
FDA guidelines compliance
HIPAA guidelines compliance
JCAHO guidelines compliance
Legal guidelines compliance

Examples:

Vital Signs mobile devices feed patient data to the EMR and a senior RN can review results before they are attached to the patients’ charts.
Infusion Pumps drug/fluid delivery tracking in EMR for long term critical care.
Enabling medical devices, such as infusion pumps, ECG machines and glucometers, to wirelessly send data from the ICU to a patient’s medical record or to a physician
Home care chronic disease monitoring systems that provide patient feedback, patient monitoring and alerts (to both patients and physicians) to the EMR.

Software for Device Integration with the HIS:

Capsule Technologie’s DataCaptor is a generic, third-party software and hardware suite that provides one of the most complete biomedical device connectivity solutions on the market. DataCaptor has a large library of supported devices – more than 250 different bedside devices – plus advanced features and easy integration with hospital information systems.

Stinger Medical’s Integriti provides a secure and mobile method of transmitting patient vital signs wirelessly to the EMR.
Current Capsule Technologie DataCaptor OEM partners include (among other HIS vendors of all sizes):

Epic Systems (EpicCare),
Philips Medical Systems (CareVue Chart/IntelliVue through DeviceLink),
Eclipsys Corporation (Sunrise Clinical Manager) and
Surgical Information Systems (anesthesia software and surgical system).

Benefits of Device Integration:

As in many hospitals, the reason for integrating devices is to automate the flow of data and interface it to the HIS application:
• To reduce transcription/documentation errors. Currently, nurses manually transcribe the data from scratch pads or from the devices onto the patient report, resulting in problems like indecipherable handwriting, data in the wrong chart, and vital signs written on scraps of paper (hands, scrub suits, etc.) that get forgotten; there is also sometimes a considerable lag between readings and reporting.
• To decrease documentation time. Significant increases in productivity can be gained by an interface that allows the nurse to validate rather than enter the data.
• To support quality data collection (charts, images, vitals) and to provide increased surveillance for critical patients – even when the care-provider is not present at the bedside. This allows for safe collection of data over time, thus providing a more accurate and valid history of patient progress.
• To increase patient safety. Safety is enhanced by decreasing data entry errors, and by allowing the nurse to review data collected when he/she is not present at the bedside. In addition the data can be captured at an increased frequency creating a more accurate depiction of the patient’s condition.
• To enable research and quality control. Data can be collected for future analysis by de-identifying patient demographics.
• To provide better patient care and more physician–patient contact time. A silent factor in a hospital’s revenue is quality of patient care, and one of its chief drivers is the quality of information provided efficiently to the physicians, through which they can make those critical decisions.
• To securely and quickly share assessment, diagnosis, treatment and patient progress data across facilities/RHIO (regions)/states thereby enabling the patient to be provided the best care anywhere.
• To reduce patient, physician and nurse stress and legal issues.
• To provide complete and comprehensive data on patient charts.
• To enable future devices to seamlessly connect to the existing EMR.
• To prevent errors in diagnosis, prescription and medication, by basing decisions on the entire patient history/allergies, the latest medications and the latest technology that are available to the patient and the care provider.
• Clinical (or Diagnostic) Decision Support Systems [CDSS] and best-practice systems are more effective with comprehensive and secure digital files (historical patient charts).
• To increase security and prevent tampering of Patient Records – since all data is digital and secured via layers of Role based security, by HIPAA and by Digital laws – the security is much more comprehensive than a system with voluminous paper records and difficult audit trails.
• Finally, to improve overall hospital throughput, patient visit times and success ratios.

I’ve linked the Capsule Technologie DataCaptor architecture diagrams below to show the data flow between DataCaptor (the server), the Concentrator (the ‘router’ or terminal box), the bedside devices, the HIS and other systems.

http://capsuletech.com/images/stories/products/ConnectDC_470
http://capsuletech.com/images/stories/products/DC_Overview_520.jpg

Note: This article is based on personal experience and public information gathered from websites including Medical Connectivity Consulting, Capsule Technologie and other medical device manufacturers’ websites. Thanks to these companies for this public information. This document is intended solely for personal reading and understanding of this technology and is not for any commercial gain.

Since PACS is a type of “Device Integrator”, the following is an addition to the above article:


Radiology RIS, PACS and the EMR Integration

The PACS – Picture Archiving and Communication System – is a filmless method of communicating and storing X-rays, CT/MRI/NM scans and other radiographs, which are verified by radiologists after being acquired by the X-ray, CT/MRI/NM machines and other modalities used in the Radiology Department. Images may be acquired from a patient in slices, and with 3D or 4D image reconstruction the patient’s entire full-body scan may be visualized on diagnostic-quality workstations. Key images, radiology reports and low-resolution non-diagnostic images are provided for viewing on any screen, securely across the internet. If bandwidth permits, in certain cases entire diagnostic-quality images may be viewable securely across the internet.

The RIS – Radiology Information System – enables “Radiology” patient scheduling, reporting/dictation, and image tracking to ensure that the PACS and the Radiology machines are effectively utilized and the patients’ structured reports are immediately available.

The EMR – Electronic Medical Record system or Hospital Information System – provides a “global” view, or historical folder, of the patient’s visits or encounters with his/her care providers. From a radiology perspective, the EMR sends ADT/orders to the RIS and receives results, including patient images and data, from the PACS (via the RIS), enabling access to that patient’s structured reports in a single, uniform location in the EMR. Images can thus be integrated with the radiology report and with reports from other patient information systems (such as laboratory, pharmacy, cardiology and nursing), providing a comprehensive folder on the patient.

Key Features of a good PACS System are:

  • Modules for comparison study of prior patient images, along with similar cases
  • Modules for Computer Aided Detection using Clinical Decision Support Systems and Key Facets
  • Excellent Data Compression Techniques to ensure effective network utilization and high speed transfers of quality images to workstations and other systems.
  • Excellent EMR Integration based on IHE Integration Profiles, standard HL7, standard DICOM and support for secure, high-speed access to patient images via the internet
  • Standard Security Features along with audit trails and Integration with RIS and EMR security.
  • Modules for 3D and 4D reconstruction of CT slices, Image Enhancement and Quality Printing
  • Immediate availability of Images on network or CD/DVD for quick diagnosis and review by remote Radiologists/experts.
  • Excellent Short Term Storage with very low retrieval time latencies.
  • Excellent Long Term Storage with decent retrieval time latencies and predictable data recovery.
  • Excellent RIS Integration.
  • Extensively tested and working successfully in other hospitals for at least two years.
  • Multiple vendor modality Integration features.
  • Downtime plan with Disaster Recovery Support.
  • Easy Upgrade-ability of hardware/storage to ensure almost infinite storage based on hospital need
  • Support for Patient De-Identification and Reporting off the PACS/RIS for data analysis.

Now that you have (selected) the PACS and RIS systems, here is the list of questions you should have regarding integration with the EMR:


EMR and RIS/PACS Integration Issues:

  • RIS/PACS features and limitations
  • Modality support for DMWL (DICOM Modality Worklist – ensuring the correct patient is scanned at the modality)
  • Key data mappings between the RIS, PACS and EMR (e.g. Study Date/Time, Patient ID, Provider, Study Status, Accession Number, etc.) – see the sketch after this list
  • Department Workflow changes (Types of Orders, Downtime Orders, Unsolicited Results, Billing, etc.)
  • What data is displayed in the Modality Worklist, and when does this worklist get updated?
  • Historical data import, cut-off dates, access policies for legacy data, etc.
  • Security, User access and integrating the PACS/RIS users with the EMR users to enable secure web access to images.
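
As a rough illustration of the key data mappings called out above, the C# sketch below shows a normalized “study key” that an integration layer might share between RIS, PACS and EMR messages. The HL7 field positions and DICOM tags shown in the comments vary between sites and vendors, so treat the whole mapping as an assumption, not a specification.

    // Sketch: one normalized record per study, joined on the accession number.
    using System;
    using System.Collections.Generic;

    class StudyKey
    {
        public string PatientId;        // e.g. HL7 PID-3 / DICOM (0010,0020)
        public string AccessionNumber;  // e.g. HL7 OBR-18 or the filler order number / DICOM (0008,0050)
        public DateTime StudyDateTime;  // e.g. HL7 OBR-7 / DICOM (0008,0020)+(0008,0030)
        public string Provider;         // e.g. HL7 OBR-16 ordering provider
        public string StudyStatus;      // e.g. Scheduled, In Progress, Verified
    }

    class StudyIndexSketch
    {
        // The accession number is typically the join key between the three systems.
        private Dictionary<string, StudyKey> byAccession = new Dictionary<string, StudyKey>();

        public void Register(StudyKey key)
        {
            byAccession[key.AccessionNumber] = key;
        }

        public StudyKey Lookup(string accessionNumber)
        {
            StudyKey key;
            return byAccession.TryGetValue(accessionNumber, out key) ? key : null;
        }
    }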


Note:
The above article is based on personal experience and is not for any commercial gain.

Automated Workflow Environments and EMR

October 30, 2006

Well, we work in the next era of software development, not only designing applications, but also developing systems that communicate with each other, thus participating in a workflow.

Automating this workflow through the seamless integration of these apps is a task that challenges many of the industries that we work in.

Automated Workflow Environments are environments in which multiple systems contribute and communicate, enabling a network of applications to solve complex problems very efficiently with no human interaction. You can call them Digital Ecosystems.

You can construct workflow nets to describe the complex problems that these systems efficiently solve. Workflow nets, a subclass of Petri nets, are attractive models for analyzing complex business processes. Because of their good theoretical foundation, Petri nets have been used successfully to model and analyze processes from many domains, for example software and business processes. A Petri net is a directed graph with two kinds of nodes – places and transitions – where arcs connect a place to a transition or a transition to a place. Each place can contain zero, one or more tokens, and the state of a Petri net is determined by the distribution of tokens over places. A transition can fire if each of its input places contains a token; when it fires (i.e. executes), it takes one token from each input place and puts one token on each output place.

In a hospital environment, for example, the processes involved show complex and dynamic behavior that is difficult to control. A workflow net that models such a process provides good insight into it and, due to its formal representation, offers techniques for improved control.

Workflows are case oriented, which means that each activity executed in the workflow corresponds to a case. In the hospital domain, a case corresponds to a patient and an activity corresponds to a medical activity. The process definition of a workflow assumes that a partial order, or sequence, exists between activities, which establishes which activities have to be executed in what order. In the Petri net formalism, workflow activities are modeled as transitions and the causal dependencies between activities are modeled as places and arcs. The routing in a workflow uses four kinds of routing constructs: sequential, parallel, conditional and iterative routing. These constructs define the route taken by ‘tokens’ in the workflow.
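
Before leaving the theory, here is a minimal C# sketch of the firing rule described above. The place and transition names are invented for illustration; a real workflow engine would add conditional and parallel routing, timers and persistence.

    // Sketch: a transition fires only if every input place holds a token; firing
    // consumes one token per input place and produces one per output place.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Transition
    {
        public string Name;
        public List<string> Inputs = new List<string>();
        public List<string> Outputs = new List<string>();
    }

    class PetriNetSketch
    {
        static Dictionary<string, int> marking = new Dictionary<string, int>
        {
            { "registered", 1 }, { "examined", 0 }   // invented places: patient registered / examined
        };

        static bool TryFire(Transition t)
        {
            if (t.Inputs.Any(p => marking[p] == 0)) return false;   // not enabled
            foreach (string p in t.Inputs) marking[p]--;            // consume tokens
            foreach (string p in t.Outputs) marking[p]++;           // produce tokens
            return true;
        }

        static void Main()
        {
            var examine = new Transition { Name = "examine", Inputs = { "registered" }, Outputs = { "examined" } };
            Console.WriteLine(TryFire(examine));   // True: the 'registered' place held a token
        }
    }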

Well, enough theory, how does this apply?

Think of this in practical terms using the example of an EMR*, CPR* or HIS* system:
• A patient arrives at a hospital for a consultation or particular set of exams or procedures.
• The patient is registered, if new to the hospital. A visit or encounter record is created in the Patient Chart (EMR) – with vitals, allergies, current meds and insurance details (a sample registration message is sketched after this list).
• The physician examines the patient and orders labs, diagnostic exams or prescription medications for the patient possibly using a handheld CPOE*
• The patient is scheduled for the exams in the RIS (radiology information system), LIS (laboratory information system) or HIS (hospital information system)
• The RIS or LIS or HIS sends notifications to the Radiology and/or Cardiology and/or Lab or other Departments in the hospital through HL7 messages for the various workflows.
• The various systems in these departments will then send HL7 or DICOM or proprietary messages to get the devices or modalities, updated with the patient data (prior history, etc.)
• The patient is then taken around by the nurses to the required modalities in the exam/LAB areas to perform the required activities.
• The patient finishes the hospital activities while the diagnosis continues; the gathered data is coalesced and stored in rich structured-report or multimedia formats in the various repositories, resulting in a summary patient encounter/visit record in the Electronic Patient Record in the EMR database.
• There could also be other workflows triggered – pharmacy, billing, etc.
• The above is just the scenario for an OUTPATIENT; there are other workflows for INPATIENT, ED, ICU and other patients.
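
As a rough sketch of the registration feed referred to above, the HIS might broadcast an HL7 ADT^A04 (register a patient) message like the one assembled below to the RIS, LIS and other departmental systems. The segment contents, IDs and receiving application names are illustrative assumptions.

    // Sketch: build an ADT^A04 registration message; values are illustrative only.
    using System;

    class AdtBroadcastSketch
    {
        static string BuildAdtA04(string mrn, string lastName, string firstName)
        {
            string ts = DateTime.Now.ToString("yyyyMMddHHmmss");
            return string.Join("\r", new[]
            {
                "MSH|^~\\&|EMR|HOSP|RIS|RAD|" + ts + "||ADT^A04|MSG0002|P|2.3",
                "EVN|A04|" + ts,
                "PID|1||" + mrn + "^^^HOSP^MR||" + lastName + "^" + firstName,
                "PV1|1|O|CLINIC^^^HOSP"   // outpatient visit; location assumed
            });
        }

        static void Main()
        {
            Console.WriteLine(BuildAdtA04("123456", "DOE", "JOHN"));
        }
    }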

The key problems in this ‘Automated Workflow Environment’ are:

• Accurate Patient Identification and Portability to ensure that the Patient Identity is unique across multiple systems/departments and maybe hospitals. The Patient Identity key is also essential to integrating patient healthcare across clinics, hospitals, regions (RHIO) and states.
• Support for Barcode/RFID on Patient Wrist Bands, Prescriptions/Medications, Billing (using MRN, Account Number, Order Number, Visit Number), etc. to enable automation and quick and secure processing.
• Quick Patient data retrieval and support for parallel transactions
• Audits and logs for tracking access to the system
• Support for PACS, Emergency care, Chronic care (ICU/PACU), Long Term care, Periodic visits, point of care charting, meds administration, vital signs data acquisition, alarm notification, surveillance for patient monitors, smart IV pumps, ventilators and other care areas – treatment by specialists in off-site clinics, etc.
• Support for Care Plans, Order sets and Templates, results’ tracking and related transactions.
• Quick vital sign results and diagnostic reporting
• Effective display of specialty content – diagnostic/research images, structured “rich” multimedia reports.
• Secure and efficient access to this data from the internet
• Removal of paper documentation and effective transcription
• SSO (Single Sign-On), security roles and ease of use for the various stakeholders – here, the patient, the RN, physician, specialist, IT support, etc.
• Seamless integration with current workflows and support for updates to hospital procedures
• Modular deployment of new systems and processes – a long term roadmap and strategies to prevent costly upgrades or vendor changes.
• HIPAA, JCAHO and legal compliance – each with an entire set of guidelines, privacy and security being the chief ones.
• Efficient standardized communication between the different systems, either via “standard” HL7, DICOM or CCOW, or proprietary formats.
• Support for a high speed fiber network for high resolution image processing systems like MRI, X-Ray, CT-SCAN, etc.
• A high speed independent network for real time patient monitoring systems and devices
• Guaranteed timely data storage and recovery with at least 99.9999% visible uptime
• Original patient data available for at least 7 years and compliance with FDA rules.
• Disaster recovery compliance and responsive performance under peak conditions.
• Optimized data storage ensuring low hardware costs
• Plug ’n’ Play of new systems and medical devices into the network, wireless communication among vital signs devices and servers, etc.
• Location tracking of patients and devices (RFID based) and bed tracking in the hospital
• Centralized viewing of the entire set of patient data – either by a patient or his/her physician
• Multi-lingual user interface possibilities (in future?)
• Correction of erroneous data and merging of patient records.
• Restructuring existing hospital workflows and processes so that this entire automated workflow environment works with a definite ROI and within a definite time period!
• Integration with billing, insurance and other financial systems related to the care charges.
• Future-proofing and support for new technologies like Clinical Decision Support (CDSS) – again, a long term roadmap is essential.

ROI: How does a hospital get returns on this IT investment?

  1. Minimization of errors – medication or surgical – and the associated risks
  2. Electronic trail of patient case history available to patient, insurance and physicians
  3. Reduced documentation and improvement in overall efficiency and throughput
  4. Patient Referrals from satellite clinics who can use the EMR’s external web links to document on patients – thus providing a continuous electronic report
  5. Possible pay-per-use by external clinics – to use EMR charting facilities
  6. Remote specialist consultation
  7. Efficient Charges, Billing and quicker settlements
  8. Better Clinical Decision Support – due to an electronic database of past treatments
  9. In the long term, efficiency means cheaper insurance which translates to volume income
  10. Better compliance of standards – HIPAA, privacy requirements, security
  11. Reduced workload due to Process Improvement across departments – ED, Obstetrics/Gynecology, Oncology/Radiology, Orthopedic, Cardiovascular, Pediatrics, Internal Medicine, Urology, General Surgery, Ophthalmology, General/family practice, Dermatology, Psychiatry
  12. Improved Healthcare with Proactive Patient Care due to CDSS
  13. Quality of Patient Care: A silent factor of a hospital’s revenue is quality of patient care. One of the chief drivers of quality of patient care is the quality of information provided efficiently to the physicians, through which they can make those critical decisions

Now, the big picture becomes clear.

Doesn’t the above set of requirements apply to any domain? This analysis need not apply only to the hospital domain; the same is true for the biotech domain (where orders are received, data is processed and analyzed, and the processed data is presented or packaged), and similarly for the manufacturing, banking and insurance domains.

The need is for core engine software – based on EDI (Electronic Data Interchange) – that integrates these mini workflows and helps with their process re-engineering, securely and effectively, using common intersystem communication formats like X12 or HL7 messages.

These Workflow Engines would be the hearts of the digital world!

Buzzwords:
*EMR – Electronic Medical Record
*CPR – Computerized Patient Record
*CDSS – Clinical Decision Support System
*RHIO – Regional Health Information Organization
*CPOE – computerized physician order entry

Some of the information presented here is thanks to research papers and articles at:
*Common Framework for health information networks
*Discovery of Workflow Models for Hospital Data
*Healthcare workflow
*CCOW-IHE Integration Profiles
*Hospital Network Management Best Practices
*12 Consumer Values for your wall

What about the latest IT trends and their applications in healthcare?

We already know about Google Earth and Google Hybrid Maps and the advantages of Web 2.0
The next best thing is to search for the best shopping deal or the best real estate by area on a hybrid map – this recombinant web-application reuse technique is called a mashup (often visualized as a heat map).
Mashups have applications in possibly everything from Healthcare to Manufacturing.
Omnimedix is developing and deploying a nationwide data mashup – Dossia, a secure, private, independent network for capturing medical information, providing universal access to this data along with an authentication system for delivery to patients and consumers.

Click on the links below to see some of the current ‘best in class’ mashups:
*After hours Emergency Doctors SMS After hours Emergency Doctors SMS system – Transcribes voicemail into text and sends SMS to doctors. A similar application can be used for Transcription Mashup (based on Interactive Voice Response – IVR): Amazon Mturk, StrikeIron Global SMS and Voice XML
* Calendar with Messages Listen to your calendar + leave messages too Mashup (based on IVR): 30 Boxes based on Voxeo , Google Calendar
* http://www.neighboroo.com/ – Housing/Climate/Jobs/Schools
* Visual Classifieds Browser – Search Apartments, visually
* http://www.trulia.com/ – Real Estate/Home pricing
* http://www.rentometer.com/ – Rent comparison
* http://realestatefu.mashfu.com/ – Real Estate Statistical Analysis
* http://www.housingmaps.com/ – Rent/Real Estate/Home pricing – linked to Craigslist
* http://virtualtourism.blogspot.com/ – Google Maps + Travel Videos
* http://www.coverpop.com/wheeloflunch/ – Wheel of Zip Code based restaurants
* More sample links at this site (unofficial Google mashup tracker): http://googlemapsmania.blogspot.com/ – includes some notable sites:
* latest news from India by map http://www.mibazaar.com/news/
* read news by the map – slightly slow http://lab.news.com.au/maps/v01
* view news from Internet TV by map – http://5tvs.com/internet-tv-maps/news/
* see a place in 360 http://www.seevirtual360.com/Map.aspx

What’s on the wish list? Well, a worldwide mashup for real estate, shopping, education and healthcare will do just fine. Read on to try out YOUR sample…
OpenKapow: the online mashup builder community that lets you easily make mashups. Use their visual scripting environment to create intelligent software robots that can build mashups from any site, with or without an API.
In the words of Dion Hinchcliffe, “Mashups are still new and simple, just like PCs were 20 years ago. The tools are barely there, but the potential is truly vast as hundreds of APIs are added to the public Web to build out of”.
Dion also covers the architecture and types of mashups here, with an update on recombinant web apps.

Keep up to date on web2.0 at http://blog.programmableweb.com/

Will Silverlight, with its simplified vector-based graphics and the workflow-capable XML language XAML, be the replacement for Flash and JavaFX?

Well, the technology is promising, and many multimedia content web application providers, including news channels, have signed up for Microsoft Silverlight (“WPF/E”) due to its lightweight browser-based viewer streaming “DVD quality” video based on the patented VC-1 video codec.

Microsoft® Silverlight™ Streaming by Windows Live™ is a companion service for Silverlight that makes it easier for developers and designers to deliver and scale rich interactive media apps (RIAs) as part of their Silverlight applications. The service offers web designers and developers a free and convenient solution for hosting and streaming cross-platform, cross-browser media experiences and rich interactive applications that run on Windows™ XP+ and Mac OS 10.4+.

The only problem is that Linux is left out, since the Mono Framework has not yet evolved sufficiently.

So, the new way to develop your AJAX RIA “multimedia web application” is: design the UI with an artist in Adobe Illustrator, then mash it up with your old RSS, LINQ, JSON, XML-based Web services, REST and WCF services to deliver a richer, scalable web application.
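
For instance, a minimal C# sketch of the “old RSS + LINQ” half of that recipe (the feed URL is a placeholder assumption) could look like this, with the projected items then handed to the page, a JSON endpoint or a WCF service:

    // Sketch: pull an RSS feed and project it with LINQ to XML.
    using System;
    using System.Linq;
    using System.Xml.Linq;

    class RssMashupSketch
    {
        static void Main()
        {
            XDocument feed = XDocument.Load("http://example.com/news/rss.xml");   // placeholder feed

            var items = from item in feed.Descendants("item")
                        select new
                        {
                            Title = (string)item.Element("title"),
                            Link = (string)item.Element("link")
                        };

            foreach (var i in items.Take(5))
                Console.WriteLine(i.Title + " -> " + i.Link);
        }
    }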


Is it Knoppix or PCLinuxOS time?

September 19, 2006

Not yet tired of windows? Well, read on to find out the next gen O/S out there…

* New capabilities – creating a remastered custom Live CD/DVD and booting from a USB flash

Let’s see where do I start…

Well, my machine crashed. It’s a new Acer – http://global.acer.com/4152 NLCI (4150 or 4650 series) – laptop, and the problem was maybe a virus, because ntoskrnl.exe was corrupted and the drivers just stopped loading after a few seconds of boot time, bringing the hard disc to a halt and the screen to a frozen white screen (talk about the old blue screen being upgraded by MS!).

Time to look around for a bootable repair kit, right? I tried – I got an XP CD and installed XP onto a different partition – but the next day that too went into the same loop as above.

Then I figured that the hard disc may have suffered some permanent damage – thanks to all the travel lately.

I looked around for a solution other than DOS, obviously, and found the best free O/S ever!

Knoppix – the free open source Linux O/S that boots and runs from a CD/DVD (like PCLinuxOS 2007) – has automatic hardware detection, recognizing all the devices: graphics card, sound card, USB, CD, DVD, RAID, modem, LAN card, printer-scanner (HP all-in-one), PCMCIA cards, SD card (the Gateway’s works with PCLinuxOS), Bluetooth (works with PCLinuxOS), wireless (works with PCLinuxOS), external camera, webcam – you name it. It’s a solid system that detects, installs and boots fast (PCLinuxOS 2007 boots in 30 seconds) and runs quickly on ordinary hardware! It can even mount my NTFS hard disc in read mode (the ntfs-3g driver supports read and write of NTFS files) to copy files to my USB drive, so my critical files were saved in no time. Of course, part of the recovery process is being able to write CDs and DVDs, which Knoppix (and PCLinuxOS) supports with a right-click menu.

Knoppix – a flavor of Debian Linux – http://www.knopper.net/knoppix/index-en.html or http://www.knoppix.net/ – is a 640 MB bootable Live CD with a collection of GNU/Linux software. It brings up a Windows-like user interface, connects to the internet (after a minor config) through a DSL or cable modem, and voilà – I’m online via Mozilla, Opera or Konqueror. Knoppix 5.1.1+ is a Debian flavour of Linux with kernel 2.6.19+, KDE 3.5.5+/GNOME 2.16+ and ntfs-3g, and it has all the flavours of a true Windows system: OpenOffice 2.1+ (Word documents, Excel spreadsheets, PPT presentations), a PDF reader, Kaffeine (media player), GIMP (for those MS Paint users), Picasa from Google, a host of NTFS data recovery tools and good card games!

PCLinuxOS 2007 TR4 is a free open source Linux O/S (a flavor of Mandrake/Mandriva Linux with kernel 2.6.22.10+, KDE 3.5.7+/GNOME 2.16+, OpenOffice 2.3+ and 3D windows support). It has all the above software modules and, comparing PCLinuxOS 2007 TR4 with Knoppix 5.1.1, it is much better than the Knoppix version – since PCLOS can be remastered and installed on a USB flash drive very easily.

Live CD Knoppix (like Live CD PCLinuxOS) uses on-the-fly decompression to load the required modules into memory from the bootable CD/DVD, so the CD is locked by Knoppix and you can’t use the drive for writing or DVD viewing, although the CD/DVD writer software and the movie viewer software are included; I can use an external DVD/CD writer/reader to perform burns and reads. You can install the PCLinuxOS 2007 TR4 Live CD to a USB flash drive. The other pluses of these Live CDs: there is already a messenger available – thanks to secure GAIM (now Pidgin) – which can connect to Yahoo, Google Chat, AOL and others. Kopete is better than other messengers due to webcam support, but it needs Knoppix/PCLinuxOS installed to the hard disc. You can also create custom bootable CDs/DVDs/USB flash drives, since the default live CDs are built as a read-only O/S (running from the CD) with default options.

Want to listen to quality music? Using StreamTuner on Linux you can listen to live internet-streamed (128 to 256 kbps) quality radio on XMMS, from around the world, for free!

Security – you can install the necessary Mozilla add-ons and the Shorewall or Firestarter firewall to boost your experience.
VLC, Gxine, Amarok and MPlayer are very good multimedia programs covering all the needed functionality.

Acrobat 8 is available for Linux for the pdf community.

OpenOffice and StarOffice are not perfect but are decent Linux office solutions.

If you are wondering about install time – there is none, since the OS just boots off a CD/USB. You can use all the features of a full-fledged O/S, and if you need to, you can install the O/S to the hard disc in 10 minutes.

So what are the minimum requirements of this new O/S? (Vista beware!)

· Intel-compatible CPU (i486 or later),
· 32 MB of RAM for text mode, at least 96 MB for graphics mode with KDE (at least 128 MB of RAM is recommended to use the various office products),
· boot-able CD-ROM drive, or a boot floppy and standard CD-ROM (IDE/ATAPI or SCSI),
· standard SVGA-compatible graphics card,
· serial or PS/2 standard mouse or IMPS/2-compatible USB-mouse.

Before you comment, please note:

  • I know I could have called Acer support – http://global.acer.com/ – since my laptop is within warranty. I didn’t call because I needed internet connectivity, not further delays and postal issues with mailing my hard disc out.
  • Knoppix and PCLinuxOS have the very good multi-session KDE 3.5+/GNOME 2.16+ windowing environment I wanted, compared to a lighter environment like DSL Linux (http://www.damnsmalllinux.org/ – that’s another topic: a 50 MB Linux with USB/mini-CD boot). Another reason not to go for a USB boot was that the laptop BIOS had not been upgraded by the manufacturer, and the only way to upgrade the BIOS is through a working Windows XP! Latest: I finally got a USB floppy drive and got my BIOS upgraded; I also created a custom Live CD of PCLinuxOS 2007 TR4 and copied it to a USB flash drive, which is now bootable (don’t forget to install the MBR) and working on the Acer 4152 and Gateway MX 6124 laptops.
  • I downloaded Knoppix 3.6 over a friend’s cable connection, burnt the CD in 10 minutes and was online on my Acer 4152 NLCI laptop in 15 minutes. Latest: I downloaded the Knoppix 5.1.1 Live DVD and the PCLinuxOS 2007 TR4 Live DVD – PCLinuxOS rates better, with custom Live DVD/USB flash boot support.
  • All of the needed drivers and software, including Gaim, Office (Word, Excel, PPT), a PDF viewer, printer-scanner drivers and an RSS reader, were on the CD, so there was nothing to install – unlike MS Windows, where a plain O/S is useless!

Knoppix 3.6 Problems and PCLinuxOS 2007 TR4 Advantages:

  • The laptop CD/DVD drive is fully used/locked and I cannot eject it while logged into the Knoppix Live O/S. This is by design, since the modules are dynamically loaded from the CD. You can burn custom CDs by downloading the necessary software via apt-get.
  • I downloaded PCLinuxOS 0.93 and later upgraded to PCLinuxOS 2007 TR4; they install to the hard disc easily with a single click, and they also have a neat way to create a custom Live CD/DVD, which can be built in 20 minutes flat using mklivecd/K3b.
  • Webcam support in the messenger is an issue with Gaim – PCLinuxOS 2007 fixed this problem with a new version of Kopete and many more web camera tools.
  • WPA wireless security is not supported with the default ndiswrapper, so you may still have to use the Windows wireless card drivers – otherwise PCLinuxOS 2007 makes secure wireless a breeze, with support for secured 128-bit WEP.
  • The apt-get feature (combined with Synaptic in PCLinuxOS) of most Linux flavors (YaST in openSUSE) is great for keeping your specific O/S features up to date and removing broken packages. The control you have over your custom machine’s software is simply great.
  • I can’t access the SD card inserted in the proprietary Texas Instruments SD/MMC card reader on the Acer laptop – but I can connect the Kodak digital camera via USB and upload the photos. Latest: the PCLinuxOS 2007 TR4 support forums have a fix for many Gateway laptop SD card reader issues, covering most SD cards and multi-card readers.
  • I can’t read MS Visio documents (but support is in development, and I will use Visio files converted to JPEGs until then).
  • OpenOffice is not yet very mature software and cannot reasonably compare to MS Office 2003 in either features or printing of Excel documents.
  • PCLinuxOS 2007 has Beryl, which is a decent 3D window manager with nice 3D window effects – but 3D windows is still not a very refined concept and I would suggest uninstalling the Compiz and Beryl 3D software.

Well, that’s it from me, see ya… Keep your Knoppix CD/DVD or PCLinuxOS CD/DVD/USB ready… as we say in Linux, there is true consumer choice – even though I personally vote for PCLinuxOS 2007 TR4.

Oh ya… Knoppix supports clusters, and a multi-computer version called “ParallelKnoppix” is out, which converts a host of Windows machines into a Linux cluster farm. Descriptions are here: http://idea.uab.es/mcreel/ParallelKnoppix/ and http://www.knoppix.net/wiki/Cluster_Live_CD

Howto? – Another useful site to learn to use Linux – http://www.linux.ie/articles/tutorials/


Code Review Checklist

February 16, 2006

Following is a checklist I refer to often; it catches many common issues:

1. No errors should occur when building the source code. No warnings should be introduced by changes made to the code. Also, any warnings during the build should be within acceptable boundaries with good reasoning.

2. Each source file should start with an appropriate header and copyright information. All source files should have a comment block describing the functionality provided by the file.

3. Describe each routine, method, and class in one or two sentences at the top of its definition. If you can’t describe it in a short sentence or two, you may need to reassess its purpose. It might be a sign that the design needs to be improved and routines may need to be split into smaller more reusable units. Make it clear which parameters are used for input and output.

4. Comments are required for aspects of variables that the name doesn’t describe. Each global variable should indicate its purpose and why it needs to be global.

5. Comment the units of numeric data. For example, if a number represents length, indicate if it is in feet or meters.

6. Complex areas, algorithms, and code optimizations should be sufficiently commented, so other developers can understand the code and walk through it.

7. Dead Code: There should be an explanation for any code that is commented out. “Dead Code” should be removed. If it is a temporary hack, it should be identified as such.

8. Pending/TODO: A comment is required for all code not completely implemented. The comment should describe what’s left to do or is missing. You should also use a distinctive marker that you can search for later (For example: “TODO:”).

9. Are assertions used everywhere data is expected to have a valid value or range? Assertions make it easier to identify potential problems. For example, test if pointers or references are valid.

10. An error should be detected and handled if it affects the execution of the rest of a routine. For example, if a resource allocation fails, this affects the rest of the routine if it uses that resource. This should be detected and proper action taken. In some cases, the “proper action” may simply be to log the error and send an appropriate message to the user.

11. Make sure all resources and memory allocated are released in the error paths. Use try/finally (or using) in C# and RAII/smart pointers in C++. Is allocated (non-garbage-collected) memory freed? All allocated memory needs to be freed when no longer needed; make sure it is released on all code paths, especially error paths. Unmanaged resources such as file, socket, graphics and database objects in C#/Java need to be disposed of at the earliest opportunity. Files, sockets, database connections, etc. (basically all objects with paired creation and deletion methods) should be freed even when an error occurs. For example, whenever you use “new” in C++, there should be a delete somewhere that disposes of the object. Resources that are opened must be closed: when opening a file in most development environments, you need to call a method to close the file when you’re done (a minimal cleanup sketch appears after this checklist).

12. If the source code uses a routine that throws an exception, there should be a function in the call stack that catches it and handles it properly. There should not be any abnormal terminations for expected flows and also the user should be informed of any un-recoverable situations.

13. Does the code respect the project coding conventions? Check that the coding conventions have been followed: variable naming, indentation and bracket style. Use FxCop and follow the C++/C# coding conventions/guidelines within acceptable limits.

14. Consider notifying your caller when an error is detected. If the error might affect your caller, the caller should be notified. For example, the “Open” methods of a file class should return error conditions. Even if the class stays in a valid state and other calls to the class will be handled properly, the caller might be interested in doing some error handling of its own.

15. Don’t forget that error handling code can itself be defective. It is important to write test cases that exercise the error handling paths.

16. Make sure there’s no code path where the same object is released more than once. Check error code paths.

17. COM Reference Counting: Frequently a reference counter is used to keep the reference count on objects (For example, COM objects). The object uses the reference counter to determine when to destroy itself. In most cases, the developer uses methods to increment or decrement the reference count. Make sure the reference count reflects the number of times an object is referred. Similarly tracing is important in code to validate the flow.

18. Thread Safety: Are all global variables thread-safe? If global variables can be accessed by more than one thread, code altering the global variable should be enclosed in a synchronization mechanism such as a mutex. Code accessing the variable should be enclosed with the same mechanism (see the lock sketch after this checklist).

19. If some objects can be accessed by more than one thread, make sure member variables are protected by synchronization mechanisms.

20. It is important to release the locks in the same order they were acquired to avoid deadlock situations. Check error code paths.

21. Database Transactions: Always Commit/Rollback a transaction at the earliest possible time. Keep transactions short.

22. Make sure there’s no possibility for acquiring a set of locks (mutex, semaphores, etc.) in different orders. For example, if Thread A acquires Lock #1 and then Lock #2, then Thread B shouldn’t acquire Lock #2 and then Lock #1.

23. Are loop ending conditions accurate? Check all loops to make sure they iterate the right number of times. Check the condition that ends the loop; ensure it ends after the expected number of iterations.

24. Check for code paths that can cause infinite loops. Make sure end-of-loop conditions will be met unless otherwise documented.

25. Do recursive functions run within a reasonable amount of stack space? Recursive functions should run with a reasonable amount of stack space. Generally, it is better to code iterative functions with proper/predictable end conditions.

26. Are whole objects duplicated when only references are needed? This happens when objects are passed by value when only references are required. It also applies to algorithms that copy a lot of memory. Consider using an algorithm that minimizes the number of object duplications, reducing the data that needs to be transferred in memory. Avoid copying entire objects onto the stack; pass references instead (the default in C# for class instances).

27. Does the code have an impact on size, speed, or memory use? Can it be optimized? For instance, if you use data structures with a large number of occurrences, you might want to reduce the size of the structure.

28. Blocking calls: Consider using a different thread for code making a function call that blocks or use a monitor thread with a well-defined timeout

29. Is the code doing busy waits instead of using synchronization mechanisms or timer events? Doing busy waits takes up CPU time. It is a better practice to use synchronization mechanisms since they force the thread to sleep without using valuable cpu time.

30. Optimizations may often make code harder to read and more likely to contain bugs. Such optimizations should be avoided unless a need has been identified. Has the code been profiled? Check if any over optimization has led to functionality disappearing.

31. Are function parameters explicitly verified in the code? This check is encouraged for functions where you don’t control the whole range of values that are sent to the function. This isn’t the case for helper functions, for instance. Each function should check its parameters for minimum and maximum possible values. Each pointer or reference should be checked to see if it is null. An error or an exception should occur if a parameter is invalid (a validation sketch appears after this checklist).

32. Make sure an error message is displayed if an index is out-of-bound. This can happen in C# too for dynamically created lists, etc.

33. Make sure the user sees simple error messages, not technical jargon.

34. Don’t return references to objects declared on the stack; return references to objects created on the heap. In C#, whenever new is called on a class a heap object is created, so this is mainly a C/C++ concern: a pointer or reference to a local (stack) object becomes invalid when the function returns, resulting in invalid data.

35. Make sure there are no code paths where variables are used before being initialized. If an object is used by more than one thread, make sure the object is not in use by another thread when you destroy it. If an object is created by a function call, make sure the call succeeded before using the object. The VS.NET C# compiler flags the use of unassigned local variables, so don’t ignore that warning.

36. Does the code re-write functionality that could be achieved by using an existing API/code? Don’t reinvent the wheel. New code should use existing functionality as much as possible. Don’t rewrite source code that already exists in the project. Code that is replicated in more than one function should be put in a helper function for easier maintenance. The existing code/library routines may be already optimized for this operation.

37. Bug Fix Side Effects: Does a fix made to a function change the behavior of caller functions? Sometimes code expects a function to behave incorrectly. Fixing the function can, in some cases, break the caller. If this happens, either fix the code that depends on the function, or add a comment explaining why the code can’t be changed.

38. Does the bug fix correct all the occurrences of the bug? If the code you’re reviewing is fixing a bug, make sure it fixes all the occurrences of the bug.

39. Is the code doing signed/unsigned conversions? Check all signed-to-unsigned conversions: can sign extension cause problems? Check all unsigned-to-signed conversions: can overflow occur? Test with the minimum and maximum possible values. A downcast from ‘long’ to ‘int’ can mean loss of data; prefer the larger data type (a conversion sketch appears after this checklist).

40. Ensure the developer has unit tested the code before sending it for review. All the limit and main functionality cases should have been tested.

41. As a reviewer, you should understand the code. If you don’t, the review may not be complete, or the code may not be well commented.

42. Lastly, when executed, the code should do what it is supposed to.
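
A few minimal C# sketches for points raised in the checklist above follow; the class names and values are illustrative assumptions, not project code.

Resource cleanup on all paths (item 11): the using statement expands to try/finally, so Dispose runs even when an exception is thrown.

    using System.IO;

    class CleanupSketch
    {
        static string ReadFirstLine(string path)
        {
            using (StreamReader reader = new StreamReader(path))   // closed even if ReadLine throws
            {
                return reader.ReadLine();
            }
        }

        // Equivalent hand-written form:
        static string ReadFirstLineExplicit(string path)
        {
            StreamReader reader = new StreamReader(path);
            try { return reader.ReadLine(); }
            finally { reader.Dispose(); }
        }
    }

Thread safety (item 18): every access to the shared value goes through the same lock object.

    class Counter
    {
        private readonly object sync = new object();
        private int value;

        public void Increment() { lock (sync) { value++; } }
        public int Read()       { lock (sync) { return value; } }
    }

Parameter validation and assertions (items 9 and 31): validate at public entry points with clear exceptions, and use assertions for internal invariants in debug builds.

    using System;
    using System.Diagnostics;

    class ValidationSketch
    {
        public static string Truncate(string text, int maxLength)
        {
            if (text == null) throw new ArgumentNullException("text");
            if (maxLength < 0) throw new ArgumentOutOfRangeException("maxLength", "must be non-negative");

            string result = text.Length <= maxLength ? text : text.Substring(0, maxLength);
            Debug.Assert(result.Length <= maxLength, "internal invariant violated");
            return result;
        }
    }

Narrowing and signed/unsigned conversions (item 39): the checked keyword turns silent wrap-around into an OverflowException.

    using System;

    class ConversionSketch
    {
        static void Main()
        {
            long big = 3000000000L;                  // does not fit in a signed 32-bit int
            Console.WriteLine(unchecked((int)big));  // silent wrap-around: a negative value

            try
            {
                Console.WriteLine(checked((int)big));   // throws instead of losing data
            }
            catch (OverflowException)
            {
                Console.WriteLine("value does not fit in an int");
            }
        }
    }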

Thanks to the authors at http://www.macadamian.com/index.php?option=com_content&task=view&id=27&Itemid=31


Migrating to ASP.NET 2.0 — It’s backward compatible

October 21, 2005

Here are my investigations, based on MSDN and a site that has been running on it at Microsoft since Aug 2005 with better performance than before:

· Because of the way that the .NET Framework is designed, you can deploy the 2.0 framework without disrupting a current installation of the 1.0 or 1.1 frameworks.

To configure a 1.x application’s script map to use the .NET Framework version 2.0

  • On the Start menu, click Run.
  • In the Open box, type inetmgr and click OK.
  • In Internet Information Services (IIS) Manager, expand the local computer, and then expand Web Sites.
  • Select the target Web site that is running in the .NET Framework version 1.x.
  • Right-click the name of the virtual directory for the Web site, and then click Properties.
    The Properties dialog box appears.
  • In the ASP.NET version selection list, choose the .NET Framework version 2.0.
    Click OK.
  • Navigate to a page in your application and confirm that your application runs as expected.

· If you are planning on using ASP.NET 2.0 on a production site, you will need to acquire the Microsoft Visual Studio 2005 Beta 2 Go-Live license (Nov 2005 is the final release of VS .NET 2005, so this may change) http://lab.msdn.microsoft.com/ or http://msdn2.microsoft.com/ . Basically, Microsoft does not offer support for the pre-release products.
· ASP.NET 2.0 and ASP.NET 1.1 Applications can live on the same IIS Server: By default, your 1.x applications will continue to use the 1.x framework. However, you will have to configure your converted/new applications (web sites/virtual directories) to use the 2.0 framework.
· Requirements for hosting ASP.NET 2.0 Apps:
o Internet Information Services (IIS) version 5.0 or later. To access the features of ASP.NET, IIS with the latest security updates must be installed prior to installing the .NET Framework. (So you can run ASP.NET 2.0 apps on old boxes with IIS5-Win 2000 Server)
o ASP.NET is supported only on the following platforms: Microsoft Windows 2000 Professional (Service Pack 3 recommended), Microsoft Windows 2000 Server (Service Pack 3 recommended), Microsoft Windows XP Professional, and Microsoft Windows Server 2003 family.
o Microsoft Data Access Components 2.8; is recommended. This is for applications that use data access.
o Supported Operating Systems: Windows 2000; Windows 98; Windows 98 Second Edition; Windows ME; Windows Server 2003; Windows XP. Make sure you have the latest service pack and critical updates for the version of Windows that you are running. To find recent security updates, visit Windows Update.
o You must also be running Microsoft Internet Explorer 5.01 or later for all installations of the .NET Framework. Install Internet Explorer 6.0 Service Pack 1.

Here’s what we gain:
New Features in ASP.NET 2.0
· Master pages are a new feature introduced in ASP.NET 2.0 to help you reduce development time for Web applications by defining a single location to maintain a consistent look and feel in a site. Master pages allow you to design a template that can be used to generate a common layout for many pages in the application.
· Content pages (I call them business logic sub-pages) are attached to a master-page and define content for any ContentPlaceHolder controls in the master page. The content page contains controls that reference the controls in the master page through the ContentPlaceHolder ID. The content pages and the master page combine to form a single response.
· Nested Master Pages: In certain instances, master pages must be nested to achieve increased control over site layout and style. For example, your company may have a Web site that has a constant header and footer for every page, but your accounting department has a slightly different template than your IT department.
· Overriding Master Pages: Although the goal of master pages is to create a constant look and feel for all of the pages in your application, there may be situations when you need to override certain content on a specific page. To override content in a content page, you can simply use a content control.
· Themes and Skins: ASP.NET 2.0 rectifies the issue of using CSS and inline styles in ASP.NET 1.1 pages through the use of themes and skins, which are applied uniformly across every page and control in a Web site. A skin is a set of properties and templates that can be used to standardize the size, font, and other characteristics of controls on a page. Themes are similar to CSS style sheets in that both themes and style sheets define a set of common attributes that apply to any page where the theme or style sheet is applied.
· Security: Managing User Info with Profiles and Login Controls: The membership provider and login controls in ASP.NET 2.0 provide a unified way of managing user information. ASP.NET 2.0 offers new login controls to help create and manage user accounts without writing any code. The ASP.NET 2.0 profile features allow you to define, save, and retrieve information associated with any user that visits your Web site. In a traditional ASP application, you would have to develop your own code to gather the data about the user, store it in session during the user’s session, and save it to some persistent data store when the user leaves the Web site.
· Localization. Enabling globalization and localization in Web sites today is difficult, requiring large amounts of custom code and resources. ASP.NET 2.0 and Visual Studio 2005 provide tools and infrastructure to easily build localizable sites, including the ability to auto-detect the incoming locale and display the appropriate locale-based UI. Visual Studio 2005 includes built-in tools to dynamically generate resource files and localization references. Together, building localized applications becomes a simple and integrated part of the development experience.
· 64-Bit Support. ASP.NET 2.0 is now 64-bit enabled, meaning it can take advantage of the full memory address space of new 64-bit processors and servers. Developers can simply copy existing 32-bit ASP.NET applications onto a 64-bit ASP.NET 2.0 server and have them automatically be JIT compiled and executed as native 64-bit applications (no source code changes or manual re-compile are required).
· Caching Improvements. ASP.NET 2.0 also now includes automatic database server cache invalidation. This powerful and easy-to-use feature allows developers to aggressively output cache database-driven page and partial page content within a site and have ASP.NET automatically invalidate these cache entries and refresh the content whenever the back-end database changes. Developers can now safely cache time-critical content for long periods without worrying about serving visitors stale data.
· Web Parts: Web Parts are modular components that can be included and arranged by the user to create a productive interface that is not cluttered with unnecessary details. The user can:
o Choose which parts to display.
o Configure the parts in any order or arrangement.
o Save the view from one Web session to the next.
o Customize the look of certain Web Parts.
· Better Development Environment: ASP.NET 2.0 continues in the footsteps of ASP.NET 1.x by providing a scalable, extensible, and configurable framework for Web application development. The core architecture of ASP.NET has changed to support a greater variety of options for compilation and deployment. As a developer, you will also notice that many of your primary tasks have been made easier by new controls, new wizards, and new features in Visual Studio 2005. Finally, ASP.NET 2.0 expands the palette of options even further by introducing revolutionary new controls for personalization, themes and skins, and master pages. All of these enhancements build on the ASP.NET 1.1 framework to provide an even better set of options for Web development within the .NET Framework.
· Last but not least, there’s a host of new language features that reduce code lines in .NET 2.0: What’s New in the C# 2.0 Language and Compiler (a brief sketch of a few of them follows the feature list below).
With the release of Visual Studio 2005, the C# language has been updated to version 2.0, which supports the following new features:
o Generics
Generic types are added to the language to enable programmers to achieve a high level of code reuse and enhanced performance for collection classes. Generic types can differ only by arity. Parameters can also be forced to be specific types. For more information, see Generic Type Parameters.

o Iterators
Iterators make it easier to dictate how a foreach loop will iterate over a collection’s contents.

o Partial Classes
Partial type definitions allow a single type, such as a class, to be split into multiple files. The Visual Studio designer uses this feature to separate its generated code from user code.

o Nullable Types
Nullable types allow a variable to contain a value that is undefined. Nullable types are useful when working with databases and other data structures that may contain elements that contain no specific values.

o Anonymous Methods
It is now possible to pass a block of code as a parameter. Anywhere a delegate is expected, a code block can be used instead: there is no need to define a new method.

o Namespace alias qualifier
The namespace alias qualifier (::) provides more control over accessing namespace members. The global:: alias allows access to the root namespace, which may be hidden by an entity in your code.

o Static Classes
Static classes are a safe and convenient way of declaring a class containing static methods that cannot be instantiated. In C# version 1.2 you would have defined the class constructor as private to prevent the class from being instantiated.

o External Assembly Alias
Reference different versions of the same component, contained in separate assemblies, with this expanded use of the extern keyword.

o Property Accessor Accessibility
It is now possible to define different levels of accessibility for the get and set accessors on properties.

o Covariance and Contravariance in Delegates
The method passed to a delegate may now have greater flexibility in its return type and parameters.

o Method Group Conversions (Delegates)
Method group conversion provides a simplified syntax for declaring delegates.

o Fixed Size Buffers
In an unsafe code block, it is now possible to declare fixed-size structures with embedded arrays.

o Friend Assemblies
Assemblies can provide access to non-public types to other assemblies.

o Inline warning control
The #pragma warning directive may be used to disable and enable certain compiler warnings.

o volatile
The volatile keyword can now be applied to IntPtr and UIntPtr.

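To make a few of these features concrete, here is a minimal, self-contained C# 2.0 sketch showing generics, iterators, nullable types, and anonymous methods together. The names (Repeat, Evens, heartRate) are made up purely for illustration.

using System;
using System.Collections.Generic;

static class CSharp2Demo
{
    // Generics: one strongly typed helper reused for any element type T.
    static List<T> Repeat<T>(T value, int count)
    {
        List<T> items = new List<T>(count);
        for (int i = 0; i < count; i++)
        {
            items.Add(value);
        }
        return items;
    }

    // Iterators: yield return lets foreach walk a lazily computed sequence.
    static IEnumerable<int> Evens(int max)
    {
        for (int i = 0; i <= max; i += 2)
        {
            yield return i;
        }
    }

    static void Main()
    {
        // Nullable types: int? can hold a value or be undefined (null),
        // which maps naturally onto nullable database columns.
        int? heartRate = null;
        Console.WriteLine(heartRate.HasValue ? heartRate.Value.ToString() : "not recorded");

        // Anonymous methods: pass a code block wherever a delegate is expected.
        List<int> evens = new List<int>(Evens(10));
        evens.ForEach(delegate(int n) { Console.Write(n + " "); });
        Console.WriteLine();

        Console.WriteLine(Repeat("vital-sign", 3).Count); // prints 3
    }
}
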
Thanks to Microsoft for the above info, drawn from the following links:
http://msdn2.microsoft.com/en-us/library/ms228038.aspx
http://msdn2.microsoft.com/en-us/library/ms228211.aspx
http://msdn2.microsoft.com/en-us/library/ms228097.aspx
http://msdn2.microsoft.com/en-us/library/7cz8t42e.aspx
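
As a rough illustration of the automatic database cache invalidation mentioned in the Caching Improvements bullet above, here is a minimal sketch (not production code). It assumes a <sqlCacheDependency> database entry named "PatientsDb" has been configured in web.config, that the dbo.Vitals table has been enabled for change notifications (for example via aspnet_regsql), and that all table and column names are hypothetical.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class VitalsCache
{
    // Returns a cached DataTable; ASP.NET evicts it automatically when dbo.Vitals changes.
    public static DataTable GetVitals(HttpContext context, string connectionString)
    {
        DataTable table = context.Cache["Vitals"] as DataTable;
        if (table == null)
        {
            table = new DataTable();
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT PatientId, HeartRate, TakenAt FROM dbo.Vitals", connectionString))
            {
                adapter.Fill(table);
            }
            // "PatientsDb" must match a <sqlCacheDependency> entry in web.config,
            // and the Vitals table must be enabled for notifications.
            context.Cache.Insert("Vitals", table,
                new SqlCacheDependency("PatientsDb", "Vitals"));
        }
        return table;
    }
}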


Create Rich Internet apps with Macromedia Flash MX (ver 6)

September 13, 2005

PORTABLE APPLICATIONS: * Flash MX ActionScript allows you to create Flash movies that are “device aware.”
BANDWIDTH-SENSITIVE APPLICATIONS: * One of the long-standing benefits of using Flash movies for Web content is the fact that SWF files can be incredibly small.
CUSTOM MEDIA PLAYERS: * With the new capabilities, you can effectively create stand-alone media players using a Flash movie as the “shell,” or skin, that provides the user interface.
Flash MX allows you to publish stand-alone projectors (as EXE or Mac APPL files) that do not require the use of a Web browser with the Flash Player plug-in. You can distribute these projectors on floppy disks, CD-ROMs, DVD-ROMs, or as downloads from your Web site.
SKINS: * One of the most flexible options for all Flash UI components is the ability to “skin,” or modify, the physical appearance of the component instances within your Flash document. Note: skins for other windowed applications can be created using XUL (an XML-like markup language), which is rendered by the Gecko engine. Gecko runs today on Win32 (Windows 95, Windows 98, Windows NT 4, Windows 2000), PowerMac, and Linux, and it is being ported to other operating systems quickly.
WEB SERVICES: * Flash MX can communicate with Web Services using Flash Remoting, which is included with ColdFusion MX and available as an add-on for ASP.NET and J2EE servers. Flash Remoting makes it easy for Flash to connect to Web Services and to other server-side components and databases.
* Develop complex interface elements (as components) with less hassle.
Now you can quickly drag scrollbars, window panes, push buttons, and radio buttons onto the stage of your Flash documents. These elements are called Flash UI components, and are installed automatically with the program.
* Play streaming video [.MOV, .AVI, .MPEG, .WMV (Windows only)]
(and audio) content delivered by a Web site running Flash Communication Server MX (aka FlashCom). And when we say, “streaming,” we mean streaming—we’re not talking about SWF files that contain embedded video content. Using FlashCom or third-party utilities like Sorenson Squeeze or Wildform Flix, you can record or convert digital video into Flash Video files, or FLV files. FlashCom can publish FLV files to several connected users in real time.
* Integrate a wide range of media content, from JPEG and GIF files to EPS, FreeHand, and Fireworks (PNG file) documents.
* Display JPEG image files at runtime. Flash Player 6 can download standard JPEG images directly into Flash movies (SWF files) as they play.
* Play embedded video content. Flash MX now allows you to import digital video files into a Flash document (FLA file). During import, the digital video is recompressed with the Sorenson Spark codec. This proprietary codec is built into Flash Player 6, so Web users do not need to download additional plug-ins such as Apple QuickTime or Real Systems RealOne Player to view the video content.
* Embed fonts that display on any supported system. Flash movies (SWF files) can embed characters from specific typefaces that you use in the Flash authoring document (FLA file). Once embedded, these characters will be seen when any user views the Flash movie.
* Integrate remote data from application servers, such as Macromedia ColdFusion MX Server or Microsoft ASP.NET. Remote data can be formatted in several ways, from URL form-encoded name/value pairs to standard XML (see the sketch after this list). The MX family of products introduces a new data format, AMF (Action Message Format), which is used by the Flash Remoting services built into ColdFusion MX Server. Now you can send and receive binary data to and from the Flash Player.
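
For the ASP.NET side of the remote-data scenario above, here is a minimal sketch of an HTTP handler that returns XML which a Flash movie could load at runtime (for example with ActionScript’s XML.load). The class name, element names, and values are hypothetical.

using System.Web;

// A minimal ASP.NET HTTP handler that returns XML for a Flash movie to consume.
public class VitalsXmlHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";
        // In a real application this would be built from a database query.
        context.Response.Write(
            "<vitals>" +
              "<reading name=\"heartRate\" value=\"72\"/>" +
              "<reading name=\"spo2\" value=\"98\"/>" +
            "</vitals>");
    }
}
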
Terminology:
* A SWF file is the Flash movie file that is published from your Flash document in Flash MX.
* The project file that you create within Macromedia Flash MX is called a Flash document and has an .fla file extension, such as main.fla. FLA files are not uploaded to your Web server for final production and delivery to your target audience.
* An SWD file is created by Flash MX when you choose Control > Debug Movie to test your movie in the Flash MX authoring environment.
* APPL/EXE: With Flash MX or the stand-alone Flash Player 6, you can create a self-running version of your Flash movie. This type of movie is called a projector, or a stand-alone. Essentially, the Flash movie and the Flash Player engine are combined into one file: an EXE file for Windows playback, or an APPL file (as the file creator type) for Macintosh. Because the projector file contains the Flash Player engine, you do not need a Web browser and a Flash Player 6 plug-in to view the Flash movie. You can distribute the projector files on a CD-ROM or some other type of fixed media, like DVD-ROM.
* The Flash Video format is designated with an .flv extension. FLV files are precompressed video and audio files.
* AS: You can save lines or entire blocks of ActionScript code in a text file with a .as file extension.

Thanks to the book authors at: http://www.mxbook.com/v1/toc.htm


Simple SQL Server Performance Tips

July 29, 2005
  1. Always create a data model (ERD).
  2. Consider using an application block or a best practice based design.
  3. Make sure the database is normalized – very important, otherwise SQL Server will not produce optimized query plans (Tips for SQL Server 2005 Query Plans). For a one-to-many (1:m or m:1) relation, ensure that the child table’s composite primary key includes the parent table’s primary key; dependent tables should carry the parent’s primary key as a foreign key together with a surrogate key as their own primary key, eg. a Person – Address relationship, or a Product – Attribute relationship. For an m:n relation, ensure there is a third (junction) table that holds the primary-key combinations of the two related tables.
  4. Make sure database security is controlled through views/stored procedures and finally roles.
  5. Ensure all commonly used joins have indexes on the join and WHERE-clause columns. Remember: a foreign key constraint does not automatically create an index.
  6. Prefer inner joins over outer joins wherever possible. Use left outer joins only when foreign keys are nullable, and try to design around NULL (avoid nullable foreign keys). Use SET ANSI_NULLS to ensure ANSI NULL behaviour. Remember: SELECT * FROM A1 WHERE b NOT IN (SELECT b FROM B1) returns no rows if any B1.b is NULL.
  7. Keep transactions as short as possible.
  8. Reduce lock time. Try to develop your application so that it grabs locks at the latest possible time, and then releases them at the very earliest time.
  9. Always display the execution plan from Query Analyzer when testing stored procs or ad-hoc SQL, and check that clustered index seeks or nested-loop joins are used rather than hash joins. Heavy I/O or hash joins show up as spikes in CPU usage in Performance Monitor (perfmon).
  10. Avoid WHERE conditions that apply functions to columns, since SQL Server does not have function-based indexes. Eg. don’t write SELECT a, b FROM X WHERE CONVERT(varchar, dateCol, 101) > '10/10/2005'; instead apply any conversion to the constant on the right-hand side and compare the raw column. This keeps the predicate sargable, so indexed columns can be used by the query plan, and it also helps plan reuse.
  11. Always run SQL Profiler against your client application and check that the Duration column values are reasonable; if they are high, run the Index Tuning Wizard on the trace to see whether additional indexes are needed for those queries.
  12. Always use connection pooling; consistent connection settings also help query-plan reuse and caching. Connection strings must match exactly for connections to be pooled; if you use Windows (NT) authentication, connect from the client as the same user. Remember: NT-based connection pooling through delegation does not work correctly in ASP.NET, and it is not as scalable as pooling with a single SQL login. You can always encrypt the connection string in the web.config file. (See the ADO.NET sketch after this list.)
  13. The SQL Server .NET data provider is the fastest. It uses TDS (Tabular Data Stream, the native SQL Server wire format) to communicate with SQL Server. The SQL Server .NET provider can be used to connect to SQL Server 7.0 and SQL Server 2000 databases, but not SQL Server 6.5 databases. If you need to connect to a SQL Server 6.5 database, the best overall choice is the OLE DB .NET data provider.
  14. Two-part names – always fully qualify tables/views/stored procs, eg. EXEC dbo.sp_storeusers or sp_sqlexec rsdb.dbo.sp_storeusers, to be compatible with future releases of SQL Server.
  15. SQL Server 2005 places no hard limits on server RAM (edition and OS permitting), supports XML natively, has a built-in tuning advisor, and works with the same T-SQL syntax as SQL Server 2000 while adding new query plan operators such as Constant Scan.
  16. Server-side cursors are not scalable in SQL Server => avoid them.
  17. Cursors degrade to the next higher-cost cursor type when ORDER BY (not covered by an index), TOP, GROUP BY, UNION, DISTINCT, etc. are used.
  18. With ADO.NET, prefer DataReaders, then DataTables, then DataSets, in increasing order of overhead.
  19. Try to use the SELECT … WITH (NOLOCK) hint where appropriate. NOLOCK returns dirty (uncommitted) data, so it is useful only when readers greatly outnumber writers. If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK hints. Consider using the NOLOCK hint to prevent locking if the data being read is not modified often.
  20. Always close and de-allocate cursors, and close connections.
  21. To check I/O costs, use SET STATISTICS IO ON – it reports the reads (touches) for each table, index, or clustered index accessed by the query.
  22. A non-clustered index seek leads to a bookmark lookup when columns not in the index must be fetched from the clustered index or row (RID).
  23. Internationalization: always use UTC time in the database and plan for Unicode. Don’t assume a particular locale or number of users; design for maximum scalability. Don’t use the NVARCHAR or NCHAR data types unless you need to store 16-bit character (Unicode) data – they take up twice as much space as VARCHAR or CHAR, increasing server I/O and wasting space in the buffer cache.
  24. ADO.NET submits parameterized ad-hoc queries through sp_executesql("…"), so their plans are cached – no problem for search pages, as long as you parameterize and use the same connection string/pooling. Use SELECT * FROM syscacheobjects to inspect the plan cache.
  25. Avoid SELECT * – requesting every column rules out covering indexes and often forces a table scan. Also have at least one clustered index on every table (unless it is very small). A table scan is performed when there is no usable index on the columns in the WHERE clause (or there is no WHERE clause at all), so every row must be evaluated.
  26. Use SET rather than SELECT for assigning a single value, eg. SET @a = 10.
  27. OPENXML is costly – it loads the XML parser inside SQL Server – so prefer BULK INSERT/bulk copy (bcp) for large data loads.
  28. DBCC – database consistency checker (a misnomer now!): DBCC FREEPROCCACHE (clear the procedure cache); DBCC DBREINDEX (run at night – high cost, takes table locks, rebuilds indexes and reapplies the fill factor, which is otherwise applied only at creation); DBCC CHECKDB (check database consistency); DBCC SHOWCONTIG (show fragmentation – extent switches, logical scan fragmentation, scan density); DBCC INDEXDEFRAG (an online operation for the daytime – low cost, page locks, fixes logical scan fragmentation).
  29. Maintenance: update statistics every night, reindex every week.
  30. sp_who – shows the SPIDs currently running and which ones are blocked.
  31. Ask for less data over the wire – it’s better to work like Explorer and fetch parent nodes first, then child nodes on user request.
  32. Heavy use of DISTINCT is not very scalable and usually points to a data-model error (the model may not be properly relational).
  33. The optimizer uses constraints – so declare indexes, foreign keys, etc.
  34. A Clustered Index Scan or full table scan means an index is missing; run the Index Tuning Wizard (thorough mode) against a Profiler trace captured while the application is running to find the missing index. The Index Tuning Wizard can also be run on individual queries from SQL Query Analyzer.
  35. SQL Query Optimizer: the columns in the SELECT list affect whether a BOOKMARK LOOKUP is needed; the predicate (WHERE clause) columns determine whether a clustered or non-clustered index seek or scan is used (range predicates such as BETWEEN tend toward scans); and the estimated number of resulting rows determines whether a Clustered Index Scan is chosen instead.
  36. DBCC MEMORYSTATUS – does the value of Stolen under Buffer Distribution increase steadily? If so, something is either consuming a lot of memory within SQL Server or not releasing it. When an application acquires a lot of Stolen memory, SQL Server cannot page it to disk as it can a data or index page; that memory must remain in SQL Server’s buffer pool and cannot be aged out. If the application uses cursors, the memory associated with an open cursor is Stolen memory => perhaps the application is opening cursors but not closing them before opening new ones.
  37. OR and IN clauses are not very performant (most of the time they result in a table scan) ==> consider UNION for large queries.
  38. Always check for SQL injection problems, including injection that uses SQL comments (--) from web-page inputs.
  39. A view is a “virtual table” – a query based on views is expanded (and possibly materialized in tempdb) during execution, so the query plan is based on the underlying SQL; if the WHERE clause applies functions such as CONVERT or RTRIM to columns, indexes will not be used, because SQL Server has no function-based indexes like Oracle.
  40. Data types: CHAR is padded with trailing spaces, VARCHAR is not. If the text data in a column varies greatly in length, use a VARCHAR data type instead of CHAR – the space saved can greatly reduce I/O reads and improve overall SQL Server performance. Don’t use FLOAT or REAL data types for primary keys, as they add unnecessary overhead that hurts performance; use one of the integer data types instead.
  41. Avoid SQL Server application roles, which do not take advantage of connection pooling.
  42. Set the following for all stored procs:
    ========================
    SET ANSI_NULLS ON — guarantees ANSI-standard NULL comparison behaviour (= NULL and <> NULL evaluate to UNKNOWN, also for IN)
    SET CONCAT_NULL_YIELDS_NULL ON — any string concatenated with NULL is NULL
    SET NOCOUNT ON — minimizes network traffic.
  43. O/RM (object-relational mapping) is a programming technique that links relational databases to object-oriented language concepts, creating (in effect) a “virtual object database.” http://en.wikipedia.org/wiki/Object-relational_mapping
  44. Simple tips from http://www.sql-server-performance.com/david_gugick_interview2.asp
    Best way to optimize stored procedures:
    • Limit the use of cursors wherever possible. Use temp tables or table variables instead. Use cursors for small data sets only.
    • Make sure indexes are available and used by the query optimizer. Check the execution plan for confirmation.
    • Avoid using local variables in SQL statements in a stored procedure. They are not as optimizable as using parameters.
    • Use the SET NOCOUNT ON option to avoid sending unnecessary data to the client.
    • Keep transactions as short as possible to prevent unnecessary locking.
    • If your application allows, use the WITH (NOLOCK) table hint in SQL SELECT statements to avoid generating read locks. This is particularly helpful with reporting applications.
    • Format and comment stored procedure code to allow others to properly understand the logic of the procedure.
    • If you are executing dynamic SQL use SP_EXECUTESQL instead of EXEC. It allows for better optimization and can be used with parameters.
    • Access tables across all stored procedures in the same logical order to prevent deadlocks from occurring.
    • Avoid non-optimizable SQL search arguments like Not Equal, Not Like, and Like ‘%x’.
    • Use SELECT TOP n [PERCENT] instead of SET ROWCOUNT n to limit the number of rows returned.
    • Avoid using wildcards such as SELECT * in stored procedures (or any SQL application for that matter).
    • When executing stored procedures from a client, using ADO for example, avoid requesting a refresh of the parameters for the stored procedure using the Parameters.Refresh() command. This command forces ADO to interrogate the database for the procedure’s parameters and causes excessive traffic and application slowdowns.
    • Break large queries into smaller, simpler ones. Use table variables or temp tables for temporary storage, if necessary.
    • Understand your chosen client library (DB-LIB, ODBC, OLE DB, ADO, ADO.Net, etc.) Understand the necessary options to set to make queries execute as quickly as possible.
    • If your stored procedure generates one or more result sets, fetch those results immediately from the client to prevent prolonged locking. This is especially important if your client library is set to use server-side cursors.
    • Do not issue an ORDER BY clause in a SELECT statement if the order of rows returned is not important.
    • Put all DDL statements (like CREATE TABLE) before any DML statements (like INSERT). This helps prevent unwanted stored procedure recompiles.
    • Only use query hints if necessary. Query hints may help performance, but can prevent SQL Server from choosing the best execution plan. A query hint that works today may not work as well tomorrow if the underlying data changes in size or statistical distribution. Try not to out think SQL Server’s query processor.
    • Consider using the SQL Server query governor cost limit option to prevent potentially long running queries from ever executing.

    Best index tuning:

    • Examine queries closely and keep track of column joins and columns that appear in WHERE clauses. It’s easiest to do this at query creation time.
    • Look for queries that return result sets based on ranges of one or more columns and consider those columns for the clustered index.
    • Avoid creating clustered primary keys if the PK is on an IDENTITY or incrementing DATETIME column. This can create hot-spots at the end of a table and cause slow inserts if the table is “write” heavy.
    • Avoid excessive indexes on columns whose statistical distribution indicates poor selectivity, i.e. values found in a large number of rows, like gender (SQL Server will normally do a table scan in this case).
    • Avoid excessive indexes on tables that have a high proportion of writes vs. reads.
    • Run the Index Tuning Wizard on a Coefficient trace file or Profiler trace file to see if you missed any existing indexes.
    • Do not totally rely on the Index Tuning Wizard. Rely on your understanding of the queries executed and the database.
    • If possible, make sure each table has a clustered index, which may be declared in the primary key constraint (if you are using a data modeling tool, check the tool’s documentation on how to create a clustered PK).
    • Indexes take up extra drive space, slow down INSERTs and UPDATEs slightly, and require longer backup/replication times, but since most tables have a much higher proportion of reads to writes, you can usually increase overall performance creating the necessary indexes, as opposed to not creating them.
    • Remember that the order of columns in a multi-column index is important. A query must make use of the columns as they are listed in the index to get the most performance increase. While you don’t need to use all columns, you cannot skip a column in the index and still receive index performance enhancement on that column.
    • Avoid creating unique indexes on columns that allow NULL values.
    • On tables whose writes far outweigh reads, consider changing the FILLFACTOR during index creation to a value that allows for adequate free space on the index pages to allow for optimal table inserts.
    • Make sure SQL Server is configured to auto update and auto create statistics. If these options cause undue strain on the server during business hours and you turn them off, make sure you manually update statistics, as needed. Also, note that sql server trace does cause a strain and slowdown on the server.
    • Consider rebuilding indexes on a periodic basis, by recreating them (consider using the DROP_EXISTING clause), using DBCC INDEXDEFRAG (SQL 2000), or DBCC DBREINDEX. These commands defragment an index and return the fill factor space to the leaf level of each index page. Consider a mix/match of each of these commands for your environment.
    • Do not create indexes that contain the same column. For example, instead of creating two indexes on LastName, FirstName and LastName, eliminate the second index on LastName.
    • Avoid creating indexes on descriptive CHAR, NCHAR, VARCHAR, and NVARCHAR columns that are not accessed often. These indexes can be quite large. If you need an index on a descriptive column, consider using an indexed view on a smaller, computed portion of the column. For example, create a view:
      CREATE VIEW view_name WITH SCHEMABINDING
      AS
      SELECT ID, SUBSTRING(col, 1, 10) AS col
      FROM table
      
      Then create an index on the reduced-sized column col:     
      
      CREATE INDEX name on view_name (col). This index can still be used by SQL Server when querying the table directly (although you would be limited in this example to searching for the first 10 characters only). Note: Indexed views are SQL Server 2000 only.
    • Use surrogate keys, like IDENTITY columns, for as many primary keys as possible. INT and BIGINT IDENTITY columns are smaller than corresponding alpha-numeric keys, have smaller corresponding indexes, and allow faster querying and joining.
    • If a column requires consistent sorting (ascending or descending order) in a query, for example:
      SELECT LastName, FirstName
      FROM Customers
      WHERE LastName LIKE 'N%'
      ORDER BY LastName DESC
      
      Consider creating the index on that column in the same order, for example:     
      
      CREATE CLUSTERED INDEX lastname_ndx ON Customers (LastName DESC, FirstName). This prevents SQL Server from performing an additional sort on the data.
    • Create covering indexes wherever possible. A covering index covers all columns selected and referenced in a query. This eliminates the need to go to the data pages, since all the information is available in the index itself.

    Benefits of using stored procedures

    • Stored procedures facilitate code reuse. You can execute the same stored procedure from multiple applications without having to rewrite anything.
    • Stored procedures encapsulate logic to get the desired result. You can change stored procedure code without affecting clients (assuming you keep the parameters the same and don’t remove any result sets columns).
    • Stored procedures provide better security to your data. If you use stored procedures exclusively, you can remove direct Select, Insert, Update, and Delete rights from the tables and force developers to use stored procedures as the method for data access.
    • Stored procedures are a part of the database and go where the database goes (backup, replication, etc.).
    • Stored procedures improve performance. SQL Server combines multiple statements in a procedure into a unified execution plan.
    • Stored procedures reduce network traffic by preventing users from having to send large queries across the network.
    • SQL Server retains execution plans for stored procedures in the procedure cache. Execution plans are reused by SQL Server when possible, increasing performance. Note SQL 7.0/2000: this feature is available to all SQL statements, even those outside stored procedures, if you use fully qualified object names.
  45. Top 10 Must Have Features in O/R Mapping Tools at http://www.alachisoft.com/articles/top_ten.html
    1. Flexible object mapping – tables and views mapping, multi-table mapping, naming convention, attribute mapping, auto-generated columns, read-only columns, required columns, validation, formula fields, data type mapping
    2. Use of existing domain objects
    3. Transactional operations – COM+/MTS, stand-alone
    4. Relationships and life-cycle management – 1 to 1, many to 1, 1 to many, many to many
    5. Object inheritance – 1 table per object or 1 table for all objects – handling insert, update, delete and load data
    6. Static and dynamic queries
    7. Stored procedure calls
    8. Object caching
    9. Customization of generated code and re-engineering support
    10. Code template customization
  46. Perform an audit of the SQL Code http://www.sql-server-performance.com/sql_server_performance_audit8.asp
    Transact-SQL Checklist

    • Does the Transact-SQL code return more data than needed?
    • Are cursors being used when they don’t need to be?
    • Are UNION and UNION SELECT properly used?
    • Is SELECT DISTINCT being used properly?
    • Does the WHERE clause make use of indexes in search criteria?
    • Are temp tables being used when they don’t need to be?
    • Are hints being properly used in queries?
    • Are views unnecessarily being used?
    • Are stored procedures being used whenever possible?
    • Inside stored procedures, is SET NOCOUNT ON being used?
    • Do any of your stored procedures start with sp_?
    • Are all stored procedures owned by DBO, and referred to in the form of databaseowner.objectname?
    • Are you using constraints or triggers for referential integrity?
    • Are transactions being kept as short as possible?
    • Is the application using stored procedures, strings of Transact-SQL code, or using an object model, like ADO, to communicate with SQL Server?
    • What method is the application using to communicate with SQL Server: DB-LIB, DAO, RDO, ADO, .NET?
    • Is the application using ODBC or OLE DB to communicate with SQL Server?
    • Is the application taking advantage of connection pooling?
    • Is the application properly opening, reusing, and closing connections?
    • Is the Transact-SQL code being sent to SQL Server optimized for SQL Server, or is it generic SQL?
    • Does the application return more data from SQL Server than it needs?
    • Does the application keep transactions open when the user is modifying data?
  47. Application Checklist
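
To tie together several of the data-access tips above (items 12, 13, 18, 24 and 26), here is a minimal C# ADO.NET sketch: one shared connection string so connections are pooled, a parameterized command that ADO.NET sends through sp_executesql so the plan is reused, and a forward-only SqlDataReader. The connection string, table, and column names are made up purely for illustration.

using System;
using System.Data;
using System.Data.SqlClient;

class DataAccessSketch
{
    // One connection string constant => identical strings => connections are pooled.
    const string ConnString =
        "Data Source=DBSERVER;Initial Catalog=HospitalDb;Integrated Security=SSPI;";

    static void PrintRecentVitals(int patientId)
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT TOP 10 HeartRate, TakenAt FROM dbo.Vitals " +
            "WHERE PatientId = @patientId ORDER BY TakenAt DESC", conn))
        {
            // Parameterized => ADO.NET wraps this in sp_executesql and the plan is reused.
            cmd.Parameters.Add("@patientId", SqlDbType.Int).Value = patientId;

            conn.Open();
            // SqlDataReader: the cheapest, forward-only, read-only way to consume results.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} at {1}",
                        reader.GetInt32(0), reader.GetDateTime(1));
                }
            }
        } // Dispose() closes the connection and returns it to the pool.
    }

    static void Main()
    {
        PrintRecentVitals(42); // hypothetical patient id
    }
}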

    Thanks to the authors at http://www.sql-server-performance.com/ and the other sites listed above.