Research Papers: Structures and Safety Reliability

Ship-to-Ship State Observer Using Sensor Fusion and the Extended Kalman Filter OPEN ACCESS

[+] Author and Article Information
Sondre Sanden Tørdal

Faculty of Engineering and Science,
University of Agder,
Jon Lilletunsvei 9,
Grimstad 4879, Norway
e-mail: sondre.tordal@uia.no

Geir Hovland

Faculty of Engineering and Science,
University of Agder,
Jon Lilletunsvei 9,
Grimstad 4879, Norway
e-mail: geir.hovland@uia.no

Contributed by the Ocean, Offshore, and Arctic Engineering Division of ASME for publication in the JOURNAL OF OFFSHORE MECHANICS AND ARCTIC ENGINEERING. Manuscript received May 21, 2018; final manuscript received September 27, 2018; published online January 17, 2019. Assoc. Editor: Nianzhong Chen.

J. Offshore Mech. Arct. Eng 141(4), 041603 (Jan 17, 2019) (9 pages) Paper No: OMAE-18-1062; doi: 10.1115/1.4041643 History: Received May 21, 2018; Revised September 27, 2018

In this paper, a solution for estimating the relative position and orientation between two ships in six degrees-of-freedom (6DOF) using sensor fusion and an extended Kalman filter (EKF) approach is presented. Two different sensor types, based on time-of-flight and inertial measurement principles, were combined to create a reliable and redundant estimate of the relative motion between the ships. An accurate and reliable relative motion estimate is expected to be a key enabler for future ship-to-ship operations, such as autonomous load transfer and handling. The proposed sensor fusion algorithm was tested with real sensors (two motion reference units (MRUs) and a laser tracker) and an experimental setup consisting of two Stewart platforms in the Norwegian Motion Laboratory, which represents an approximate scale of 1:10 compared to real-life ship-to-ship operations.


Ship-to-ship, also called vessel-to-vessel, operations are used to transfer payloads or personnel from one ship to another. Such operations can be severely limited by weather conditions, especially for smaller ships, which can experience significant weather-induced motions. Compensation systems such as dynamic positioning (DP; see, for example, Refs. [1] and [2]) and active heave compensated cranes and winches (see, for example, Ref. [3]) can be used to extend the so-called weather window.

In order to achieve safe and accurate load transfer by operator-guided automatic control systems, or fully autonomous systems, the relative motion between the two ships must be measured in six degrees-of-freedom (6DOF) and in real-time. It is highly critical that the measurement of the relative motion between the two ships is reliable and cannot be interrupted during the operation. If this measurement is lost, it may cause severe damage to the payload, personnel, and/or the ships.

The proposed solution to the ship-to-ship measurement problem relies on sensor fusion techniques to combine both inertial and visual sensors in real-time. Figure 1 illustrates the problem: two ships are lying alongside each other, and a crane is supposed to land a load onto the secondary ship's cargo deck. The figure indicates that there is a visual sensor capable of measuring the relative pose between two coordinate frames attached to each of the ships.

In addition, a motion reference unit (MRU) is placed on each ship, capable of measuring that ship's body motion relative to its respective heading frame. The sensor fusion goal is to estimate the body-to-body pose, which means that the heading offset has to be estimated by the sensor fusion algorithm since it is not directly measured by any of the sensors. When the two ships are moored to each other or controlled by a DP system, the heading offset is more or less constant, or at least slowly varying. By exploiting this, it should be possible to lose sight of the visual marker attached to the secondary ship for short periods and still remain operational by relying on the last estimated heading offset and the inertial measurements from the two MRU sensors. It is also assumed that a wireless communication link exists between the ships for real-time transfer of sensor data; this was experimentally investigated and validated earlier in Ref. [4]. In Sec. 2, the ship-to-ship kinematics is elaborated in more detail, the process and measurement models are presented, and a state estimation algorithm using the extended Kalman filter (EKF) is presented at the end.

In order to increase the reliability of the relative motion measurement, a sensor fusion approach is proposed in this paper, utilizing two different measurement principles: (1) a visual (time-of-flight) sensor measuring the relative motion directly and (2) an MRU located on each ship. These two sensor types were selected because the visual sensor allows the system to be operated from one ship only, even if communication with the second ship is lost, while an MRU on each ship provides additional safety, since the visual sensor may be interrupted by water spray or other obstacles, for example, the payload itself. One requirement of the sensor fusion algorithm is the ability to operate with only one sensor system and to automatically include the second sensor system when it becomes available again after downtime. Using the two MRU sensors alone introduces drift in the estimated relative motion over time; the sensor fusion algorithm should eliminate this drift when the data from the visual sensor are included.

A higher level of autonomy is expected in future offshore load handling and shipping activities in general. In Ref. [5], it is stated: "Autonomous shipping is the future of the maritime industry. As disruptive as the smartphone, the smart ship will revolutionize the landscape of ship design and operations." One specific case considered in this paper is the load transfer from one ship onto another using an offshore crane where the load is suspended from the crane tip. Today, these operations are carried out by experienced and highly skilled crane operators. There is therefore strong industry interest in equipping these cranes with autonomous systems that assist the crane operator in such offshore operations, which may result in higher repeatability, improved safety, and more efficient load transfers. As a first step toward this goal, the relative ship-to-ship motion estimation has to be solved in real-time.

Ship-to-Ship Kinematic Definitions.

In this section, the relative ship-to-ship kinematics is investigated in order to form a fundamental understanding of the relative motions occurring between the two ships while situated at sea. This kinematic model forms the basis for both the process and measurement models used in the relative state estimation problem presented later on. The notation and the rigid body ship kinematics follow Fossen's robotics-like approach to hydrodynamics; see Ref. [6] for more detailed information on these topics. The kinematic structure between the two ship bodies is illustrated in Fig. 2.

It is assumed that both ships are equipped with an MRU sensor, which is calibrated to measure the transformation between the heading/inertial frame {n} and the body-fixed frame {b}. This rigid body transformation can be parameterized as

(1) $R_b^n(\Theta_{nb}) = R_z(\psi)\,R_y(\theta)\,R_x(\phi), \quad \Theta_{nb} = [\phi \;\; \theta \;\; \psi]^T$

where $R_b^n(\Theta_{nb})$ is the rotation matrix describing the body frame {b} relative to the inertial frame {n}. Velocities are usually expressed in the body frame, at least the rotational velocities $\omega_{b/n}^b$, since the Euler angle rates $\dot{\Theta}_{nb}$ have no direct physical interpretation. The transformation between these two representations is given by

(2) $\dot{\Theta}_{nb} = T_\Theta(\Theta_{nb})\,\omega_{b/n}^b$

where the transformation matrix $T_\Theta(\Theta_{nb})$ is defined as

(3) $T_\Theta(\Theta_{nb}) = \begin{bmatrix} 1 & \sin(\phi)\tan(\theta) & \cos(\phi)\tan(\theta) \\ 0 & \cos(\phi) & -\sin(\phi) \\ 0 & \sin(\phi)/\cos(\theta) & \cos(\phi)/\cos(\theta) \end{bmatrix}$
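As a concrete illustration, the Euler-rate transformation of Eqs. (2) and (3) can be sketched in Python. This is a sketch for illustration only; the function name and NumPy usage are ours, not from the paper.

```python
import numpy as np

def T_theta(phi: float, theta: float) -> np.ndarray:
    """Euler-rate transformation T_Theta of Eq. (3): maps the body-fixed
    angular velocity to ZYX Euler angle rates. Singular at theta = +/-90 deg,
    where cos(theta) = 0."""
    s, c, t = np.sin, np.cos, np.tan
    return np.array([
        [1.0, s(phi) * t(theta),  c(phi) * t(theta)],
        [0.0, c(phi),            -s(phi)],
        [0.0, s(phi) / c(theta),  c(phi) / c(theta)],
    ])
```

At zero roll and pitch the matrix reduces to the identity, so the Euler angle rates coincide with the body-fixed angular velocity.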

Further, the translational velocities and accelerations are represented in the inertial frame {n}, which results in the following velocity Jacobian:

(4) $\underbrace{\begin{bmatrix} \dot{r}_{b/n}^n \\ \dot{\Theta}_{nb} \end{bmatrix}}_{\dot{\eta}} = \underbrace{\begin{bmatrix} I & 0 \\ 0 & T_\Theta(\Theta_{nb}) \end{bmatrix}}_{J_\Theta(\eta)} \underbrace{\begin{bmatrix} \dot{r}_{b/n}^n \\ \omega_{b/n}^b \end{bmatrix}}_{v}$

which establishes the connection between the two velocity representations $\dot{\eta}$ and $v$, parameterized as

(5a) $\dot{\eta} = [\dot{x}, \dot{y}, \dot{z}, \dot{\phi}, \dot{\theta}, \dot{\psi}]^T$

(5b) $v = [\dot{x}, \dot{y}, \dot{z}, \omega_x, \omega_y, \omega_z]^T$

The goal is to track the relative motion between the two floating ships using sensor fusion. From Fig. 2, it is clear that if a sensor is capable of tracking all six states between {c} and {p}, the relative motion between {b1} and {b2} can be determined directly from

(6a) $r_{b2/b1}^{b1} = r_{c/b1}^{b1} + R_c^{b1}\left(r_{p/c}^c - R_p^c\, r_{p/b2}^p\right)$

(6b) $R_{b2}^{b1} = R_c^{b1}\, R_p^c\, R_{b2}^p$

where $r_{b2/b1}^{b1}$ is the position of {b2} relative to {b1} expressed in {b1}, and $R_{b2}^{b1}$ describes the orientation of {b2} relative to {b1}. The sensor used to track all 6DOF between {c} and {p} may be based on laser technology, as in Ref. [7], or on a camera vision system combined with a fiducial marker, such as the ArUco marker presented in Ref. [8] and applied in Ref. [9]. However, this solution is not robust on its own, especially in harsh offshore conditions where sea spray, varying light conditions, and physical objects may interrupt the line of sight of the visual tracker for unknown periods of time. In addition, it does not account for how the bodies are oriented with respect to gravity, which is a disadvantage when controlling a load hanging from one ship over another.
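Equation (6) is a pure frame-composition computation; a minimal NumPy sketch (function and argument names are ours, for illustration) could look like:

```python
import numpy as np

def relative_pose(r_c_b1, R_c_b1, r_p_c, R_p_c, r_p_b2, R_b2_p):
    """Body-to-body pose of {b2} relative to {b1}, Eqs. (6a)-(6b).
    (r_c_b1, R_c_b1) and (r_p_b2, R_b2_p) are fixed sensor/probe mounting
    transforms; (r_p_c, R_p_c) is the measured probe pose in frame {c}."""
    # Eq. (6a): position of {b2} expressed in {b1}
    r_b2_b1 = r_c_b1 + R_c_b1 @ (r_p_c - R_p_c @ r_p_b2)
    # Eq. (6b): orientation chained through {c} and {p}
    R_b2_b1 = R_c_b1 @ R_p_c @ R_b2_p
    return r_b2_b1, R_b2_b1
```

With all rotations set to the identity, the position simply accumulates along the chain, which provides a quick sanity check of the frame bookkeeping.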

By extending the measurement loop to also account for the measurements carried out with an MRU sensor placed on each of the two vessels, a more robust solution to the relative tracking problem is feasible. This may even allow for periods where it is not possible to track the other ship visually using a visual tracker, which introduces redundancy compared to the previously mentioned solution. To solve this problem, the two MRUs and the visual tracker have to be combined using a sensor fusion approach. Both the system process and the measurements have to be modeled in order to apply sensor fusion methods such as the EKF or the Particle filter [10,11].

Process Model.

The process model, which describes the problem illustrated in Fig. 1, is based solely on kinematics, meaning that the dynamics due to the ship mass/inertia, damping, and external forces acting on the ship are not modeled. However, if these physical parameters were known, the process model would most likely yield better performance than the purely kinematic approach. Each of the two ships is modeled as an individual process with state vector $x_s$ defined as

(7) $x_s = \begin{bmatrix} \eta \\ v \\ \dot{v} \end{bmatrix}$

where $\eta$ describes the position and attitude of {b} relative to {n}, $v$ the velocities, and $\dot{v}$ the accelerations; the details of these vectors are given in Eqs. (4) and (5). Using Eq. (4), the ship process model follows directly as

(8) $\dot{x}_s = \begin{bmatrix} J_\Theta(\eta)\,v \\ \dot{v} \\ 0 \end{bmatrix} + w_s$

where $w_s$ is the additive ship process noise. In addition to the two ship processes, the heading offset between the frames {n1} and {n2} must be modeled. It is assumed that both ships are controlled by a DP system, so the offset position $r_{n2/n1}^{n1} = [\Delta x, \Delta y, \Delta z]^T$ and orientation $R_{n2}^{n1} = R_z(\Delta\psi)$ are slowly varying or close to constant during the operation. The heading offset state $x_o$ is defined as

(9) $x_o = [\Delta x \;\; \Delta y \;\; \Delta z \;\; \Delta\psi]^T$

where $\Delta x$, $\Delta y$, and $\Delta z$ are the positional offsets and $\Delta\psi$ is the heading offset angle between the two inertial frames {n1} and {n2}. Since the offset is assumed constant, its process model is simply

(10) $\dot{x}_o = 0 + w_o$

where $w_o$ is the process uncertainty. The complete state vector accounting for both ships and the heading offset is

(11) $x = \begin{bmatrix} x_{s1} \\ x_{s2} \\ x_o \end{bmatrix}$

and the corresponding ship-to-ship process model is obtained by augmenting the two ship process models and the heading offset model:

(12) $\dot{x} = \begin{bmatrix} \dot{x}_{s1} \\ \dot{x}_{s2} \\ \dot{x}_o \end{bmatrix} = \underbrace{\begin{bmatrix} J_\Theta(\eta_1)\,v_1 \\ \dot{v}_1 \\ 0 \\ J_\Theta(\eta_2)\,v_2 \\ \dot{v}_2 \\ 0 \\ 0 \end{bmatrix}}_{f(x)} + \underbrace{\begin{bmatrix} w_{s1} \\ w_{s2} \\ w_o \end{bmatrix}}_{w}$

where the combined state vector $x$ is related to the kinematic model by

$\eta_1 = \begin{bmatrix} r_{b1/n1}^{n1} \\ \Theta_{n1b1} \end{bmatrix}, \quad v_1 = \begin{bmatrix} \dot{r}_{b1/n1}^{n1} \\ \omega_{b1/n1}^{b1} \end{bmatrix}, \quad \dot{v}_1 = \begin{bmatrix} \ddot{r}_{b1/n1}^{n1} \\ \dot{\omega}_{b1/n1}^{b1} \end{bmatrix}$

$\eta_2 = \begin{bmatrix} r_{b2/n2}^{n2} \\ \Theta_{n2b2} \end{bmatrix}, \quad v_2 = \begin{bmatrix} \dot{r}_{b2/n2}^{n2} \\ \omega_{b2/n2}^{b2} \end{bmatrix}, \quad \dot{v}_2 = \begin{bmatrix} \ddot{r}_{b2/n2}^{n2} \\ \dot{\omega}_{b2/n2}^{b2} \end{bmatrix}$

where all the kinematic variables are illustrated by Fig. 2.
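The kinematic state derivative $f(x)$ of Eq. (12) can be evaluated with a few lines of NumPy. The sketch below assumes the state ordering $x = [\eta_1, v_1, \dot{v}_1, \eta_2, v_2, \dot{v}_2, x_o]$ (18 states per ship plus 4 offset states, 40 in total); the function names are ours.

```python
import numpy as np

def J_theta(eta):
    """Velocity Jacobian J_Theta(eta) of Eq. (4): identity on the
    translational part, T_Theta on the rotational part."""
    phi, theta = eta[3], eta[4]
    s, c, t = np.sin, np.cos, np.tan
    T = np.array([[1.0, s(phi) * t(theta),  c(phi) * t(theta)],
                  [0.0, c(phi),            -s(phi)],
                  [0.0, s(phi) / c(theta),  c(phi) / c(theta)]])
    J = np.eye(6)
    J[3:, 3:] = T
    return J

def f(x):
    """Eq. (12): kinematic state derivative for the 40-dimensional state
    x = [eta1, v1, v1_dot, eta2, v2, v2_dot, x_o]."""
    xdot = np.zeros(40)
    for i in (0, 18):                        # the two 18-state ship blocks
        eta, v, vdot = x[i:i+6], x[i+6:i+12], x[i+12:i+18]
        xdot[i:i+6] = J_theta(eta) @ v       # eta_dot = J_Theta(eta) v
        xdot[i+6:i+12] = vdot                # v_dot equals the acceleration state
        # jerk (xdot[i+12:i+18]) stays zero: modeled as process noise, Eq. (8)
    # heading offset (xdot[36:40]) stays zero: assumed constant, Eq. (10)
    return xdot
```

Note that only the jerk and offset rows are identically zero; these are exactly the rows that receive process noise in the covariance structure described below.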

Measurement Model.

The measurement model establishes the relationship between the measurements from the two MRU sensors, $z_{mru1}$ and $z_{mru2}$, and the 6DOF laser tracker (Leica AT960), $z_{leica}$. The relative measurement between the two ships is needed to estimate the heading offset state $x_o$. As mentioned earlier, this could be provided by any sensor capable of tracking all 6DOF between {c} and {p}, but the Leica laser tracker is used here since it offers state-of-the-art accuracy, as investigated in Ref. [7]. The two MRU measurement models are defined as

(13) $z_{mru1} = h_{mru1}(x) = \begin{bmatrix} \eta_1 \\ v_1 \end{bmatrix}, \quad z_{mru2} = h_{mru2}(x) = \begin{bmatrix} \eta_2 \\ v_2 \end{bmatrix}$

The laser tracker measurement of the pose between {c} and {p} is unfortunately not linearly dependent on the states. The relative orientation is related to the state vector $x$ by

$R_p^c = \left(R_{zyx}(\Theta_{n1b1})\,R_c^{b1}\right)^T \left(R_z(\Delta\psi)\,R_{zyx}(\Theta_{n2b2})\,R_p^{b2}\right)$

where $R_p^c$ is the rotation matrix from {c} to {p}. The origins of the frames {c} and {p} are located at

$r_{c/n1}^{n1} = r_{b1/n1}^{n1} + R_{zyx}(\Theta_{n1b1})\,r_{c/b1}^{b1}$

$r_{p/n1}^{n1} = r_{n2/n1}^{n1} + R_{n2}^{n1}\left(r_{b2/n2}^{n2} + R_{zyx}(\Theta_{n2b2})\,r_{p/b2}^{b2}\right)$

Combining these equations yields the complete Leica measurement model

(14) $h_{leica}(x) = \begin{bmatrix} \left(R_{zyx}(\Theta_{n1b1})\,R_c^{b1}\right)^T \left(r_{p/n1}^{n1} - r_{c/n1}^{n1}\right) \\ R_{p(1,1)}^c \\ R_{p(1,2)}^c \\ R_{p(1,3)}^c \\ R_{p(2,1)}^c \\ R_{p(2,2)}^c \\ R_{p(2,3)}^c \\ R_{p(3,1)}^c \\ R_{p(3,2)}^c \\ R_{p(3,3)}^c \end{bmatrix}$

where the subscript (n, m) indicates row n and column m of the rotation matrix $R_p^c$. The measurement model is highly nonlinear and requires the Leica tracker to measure both the position and the orientation of {p}. It is also worth mentioning that all nine rotation matrix components are included in the measurement model, even though a quaternion parameterization could be used instead to obtain fewer measurement equations.
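The Leica measurement prediction of Eq. (14) can be sketched directly from the frame equations above. In the following Python sketch, the mounting offsets ($r_{c/b1}^{b1}$, $R_c^{b1}$, $r_{p/b2}^{b2}$, $R_p^{b2}$) are treated as calibrated constants, and the function and argument names are ours.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rzyx(phi, theta, psi):
    return Rz(psi) @ Ry(theta) @ Rx(phi)   # Eq. (1)

def h_leica(eta1, eta2, x_o, r_c_b1, R_c_b1, r_p_b2, R_p_b2):
    """Eq. (14): predicted Leica measurement, i.e. the relative position
    expressed in {c} stacked with the nine entries of R_p^c."""
    dpsi = x_o[3]
    R_n1_c = (Rzyx(*eta1[3:]) @ R_c_b1).T            # maps {n1} vectors into {c}
    # positions of the {c} and {p} origins expressed in {n1}
    r_c_n1 = eta1[:3] + Rzyx(*eta1[3:]) @ r_c_b1
    r_p_n1 = x_o[:3] + Rz(dpsi) @ (eta2[:3] + Rzyx(*eta2[3:]) @ r_p_b2)
    R_p_c = R_n1_c @ Rz(dpsi) @ Rzyx(*eta2[3:]) @ R_p_b2
    return np.concatenate([R_n1_c @ (r_p_n1 - r_c_n1), R_p_c.ravel()])
```

The returned 12-vector matches the structure of Eq. (14): three position components followed by the row-wise rotation matrix entries.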

Process and Measurement Covariances.

The ship position, velocity, and acceleration are modeled directly using rigid body kinematics, and hence no process noise is added to this part of the model. However, the ship jerk $\dot{v}$ is modeled as zero, i.e., there is some model uncertainty, which is accounted for as process noise. The same applies to the offset state, where the time derivative $\dot{x}_o$ is modeled as zero and process noise accounts for this uncertainty. The ship and heading offset process noise covariance matrices are defined by

$Q_s = \sigma_s^2 \begin{bmatrix} 0_{12\times12} & 0_{12\times6} \\ 0_{6\times12} & I_{6\times6} \end{bmatrix}, \quad Q_o = \sigma_o^2\, I_{4\times4}$

where $\sigma_s$ and $\sigma_o$ are tuning parameters. The combined process covariance matrix for both ships and the heading offset states is given by

$Q = \begin{bmatrix} Q_{s1} & 0_{18\times18} & 0_{18\times4} \\ 0_{18\times18} & Q_{s2} & 0_{18\times4} \\ 0_{4\times18} & 0_{4\times18} & Q_o \end{bmatrix}$
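Assembling this block-diagonal Q is mechanical; a short sketch (our helper, assuming the 18+18+4 state ordering) is:

```python
import numpy as np
from scipy.linalg import block_diag

def process_covariance(sigma_s1: float, sigma_s2: float, sigma_o: float):
    """Assemble the 40x40 process covariance Q: per-ship noise enters only
    on the six jerk states (the last 6 of each 18-state ship block), plus
    the 4 heading-offset states."""
    Qs = lambda s: s**2 * block_diag(np.zeros((12, 12)), np.eye(6))
    Qo = sigma_o**2 * np.eye(4)
    return block_diag(Qs(sigma_s1), Qs(sigma_s2), Qo)
```

Only 16 diagonal entries are nonzero, reflecting that noise is injected solely where the kinematic model is uncertain (jerk and heading offset).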

The measurement covariance matrices are implemented as

$R_{mru} = \mathrm{diag}\left(\sigma_{mru,1}^2, \ldots, \sigma_{mru,12}^2\right), \quad R_{leica} = \mathrm{diag}\left(\sigma_{leica,1}^2, \ldots, \sigma_{leica,12}^2\right)$

where each standard deviation is tuned according to the datasheets of the MRUs and the Leica tracker. Where the datasheets did not provide sufficient information, physical experiments were conducted to find suitable values for the standard deviation of each sensor output signal. In these experiments, the sensor output was compared with the Stewart platform feedback signals, and the standard deviation of a given signal was found as

$\sigma_x = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(e_i - \mu_e\right)^2}, \quad e = y_m - y_s$

where N is the number of data samples in the experiment, $\mu_e$ is the mean of the error trend $e$, and $e_i$ is the ith error sample, formed by taking the difference between the sensor measurement $y_m$ and the high precision Stewart platform feedback signal $y_s$, which is used to benchmark the sensor precision.
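This calibration step is a one-line population standard deviation of the benchmark error; a sketch (our function name) is:

```python
import numpy as np

def sensor_sigma(y_m: np.ndarray, y_s: np.ndarray) -> float:
    """Standard deviation of one sensor channel benchmarked against the
    Stewart platform feedback: sigma = sqrt(mean((e_i - mu_e)^2)), e = y_m - y_s."""
    e = y_m - y_s
    return float(np.sqrt(np.mean((e - e.mean()) ** 2)))
```

For instance, a sensor alternating +/-1 about a perfect reference yields a standard deviation of 1.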

It should be noted that tuning the process covariance Q is, in general, harder than tuning the measurement covariance R. However, Q may be tuned to give a satisfactory EKF response by trial and error. This procedure is guided by the fact that a lower process covariance relative to the measurement covariance makes the EKF trust the model more than the sensor measurements, whereas a lower measurement covariance makes the sensor measurements dominate the EKF output. In our experimental work, the measurement covariance is more or less known from datasheets and previous experiments; it was therefore decided to first determine the measurement covariance matrices $R_{mru}$ and $R_{leica}$ and then tune the process covariance Q by trial and error to obtain a satisfactory EKF response.

Sensor Fusion Using Extended Kalman Filter.

The numerical EKF implementation is based on a linearization of the process model in Eq. (12) and the measurement equations in Eqs. (13) and (14). For the discrete-time implementation with time-step $T_s$, the discrete state transition function is given by

(15) $x_k = \underbrace{x_{k-1} + f(x_{k-1})\,T_s}_{f_k(x_{k-1},\,T_s)} + w_k$

where the process model is discretized using a first-order forward Euler approximation. The EKF requires both the process and the measurement models to be linearized at each prediction and correction step according to

(16) $F_k = \left.\frac{\partial f_k}{\partial x}\right|_{\hat{x}_{k-1|k-1}}, \quad H_{s,k} = \left.\frac{\partial h_{s,k}}{\partial x}\right|_{\hat{x}_{k|k-1}}$

where $\hat{x}_{k-1|k-1}$ denotes the previous state estimate and $\hat{x}_{k|k-1}$ the predicted state estimate. The actual implementation of the EKF is described in Algorithm 1.
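Since Algorithm 1 is not reproduced here, the following is a generic EKF cycle in the spirit of the text: a forward-Euler prediction per Eq. (15) followed by a correction whose measurement stack grows or shrinks with sensor availability. This is a sketch under our own naming and interface assumptions (diagonal per-sensor R matrices), not the paper's exact implementation.

```python
import numpy as np

def ekf_step(x, P, f, F_jac, Ts, Q, measurements):
    """One EKF prediction/correction cycle. `measurements` is a list of
    (z, h, H_jac, R) tuples; a sensor that is currently unavailable is
    simply omitted, so H, R, and the innovation change size per cycle."""
    # Prediction: Eq. (15), with F_k = I + (df/dx) * Ts from Eq. (16)
    x_pred = x + f(x) * Ts
    F = np.eye(len(x)) + F_jac(x) * Ts
    P_pred = F @ P @ F.T + Q

    if not measurements:                      # no sensor available: predict only
        return x_pred, P_pred

    # Correction: stack whichever sensors are available this cycle
    z = np.concatenate([m[0] for m in measurements])
    h = np.concatenate([m[1](x_pred) for m in measurements])
    H = np.vstack([m[2](x_pred) for m in measurements])
    R = np.diag(np.concatenate([np.diag(m[3]) for m in measurements]))

    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Dropping the Leica tuple from `measurements` reproduces the MRU-only operation mode; re-adding it restores the full correction without any other changes.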

From the algorithmic implementation, it is seen that $H_k$, $R_k$, and $y_k$ change in size from one correction step to the next, depending on whether the Leica tracker is available. This feature enables relative ship motion tracking even when the Leica tracker loses visual sight of the tracking probe. The resulting tracking performance with and without the Leica tracker is presented in the experimental section.

The time-domain simulation model of the ships is based on the hydrodynamic modeling technique presented in Ref. [6]. The nonlinear time-domain simulation is formulated as

(17a) $\dot{\eta} = J_\Theta(\eta)\,v$

(17b) $\underbrace{\left[M_{RB} + M_A\right]}_{M}\dot{v} + D\,v + G\,\eta = \tau_{wave} + \tau_{DP}$

where the mass matrix M is the sum of the rigid body mass $M_{RB}$ and the added hydrodynamic mass $M_A$, D is the viscous damping matrix, and G contains the restoring forces acting on the ship body due to hydrostatic buoyancy. This is a simplified model with linear approximations of the model matrices, but it serves well for experimental testing of state observers and control problems. The linearly approximated model matrices $M_{RB}$, $M_A$, D, and G are found from ship strip theory or potential theory; commercial software such as ShipX (Veres), which uses strip theory [12], and WAMIT, which uses potential theory, can generate the model matrices given the ship geometry and the accompanying physical parameters. In our simulation, the model matrices represent a supply ship and are inherited from the marine systems simulator (MSS) [13]. Since the ship is assumed to be controlled by a DP system, the Coriolis term C(v) is omitted because the ship's forward speed is approximately zero, in accordance with Ref. [6]. The stochastic wave forces $\tau_{wave}$ are based on a linearized version of the stochastic Pierson–Moskowitz wave spectrum [14]. The linearized stochastic wave forces are described as

(18) $\tau_{wave} \approx K\,H_s(s)\,w(s)$

where K is a static gain tuned to obtain the desired ship response, $w(s)$ is a zero-mean unit-intensity white noise process, and $H_s(s)$ represents the linearized wave spectrum, composed as

(19) $H_s(s) = \mathrm{diag}\left(h^{(1)}(s), \ldots, h^{(6)}(s)\right)$

where $h^{(n)}(s)$ is the transfer function for ship DOF n. The linear transfer function is given by

(20) $h(s) = \frac{2\,\lambda\,\omega_0\,\sigma\,s}{s^2 + 2\,\lambda\,\omega_0\,s + \omega_0^2}$

where the parameters $\lambda$, $\omega_0$, and $\sigma$ are found by solving the following optimization problem:

(21) $\underset{\lambda,\,\omega_0,\,\sigma}{\text{minimize}} \;\; \left[\,|h(j\omega)|^2 - S(\omega)\,\right]^T \left[\,|h(j\omega)|^2 - S(\omega)\,\right]$

where $S(\omega)$ is the Pierson–Moskowitz wave spectrum. A comparison between the wave energy spectrum and the linearized spectrum is given in Fig. 3. Both ships are assumed to be controlled by a dynamic positioning system, meaning that the control objective

(22) $\lim_{t \to \infty} e(t) = \begin{bmatrix} x_d(t) - x(t) \\ y_d(t) - y(t) \\ \psi_d(t) - \psi(t) \end{bmatrix} \to 0$

is to be achieved using a suitable control algorithm. System identification was applied to identify the dynamics in the surge, sway, and heading directions, and the dynamics were modeled as first-order systems. The control error e(t) is therefore driven to zero using a proportional-derivative (PD) control algorithm defined by

(23) $\tau_{DP} = K_p\,e(t) + K_d\,\dot{e}(t)$

where $K_p$ and $K_d$ are the controller gains, which are tuned to give a satisfactory response. The control gains are calculated by comparing the coefficients of the closed-loop characteristic polynomial $p(s) = f(\tau, K_{SS}, K_p, K_d)$ with those of the desired closed-loop characteristic polynomial $p_d(s) = (s + p_d)^2$.
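The coefficient comparison can be made concrete under an assumed model structure. If the identified first-order channel dynamics from actuation to velocity are $K_{SS}/(\tau s + 1)$ (our assumption; the paper does not state the exact form), the closed loop gives $\tau s^2 + (1 + K_{SS}K_d)s + K_{SS}K_p$, and matching against $\tau(s + p_d)^2$ yields closed-form gains:

```python
def pd_gains(tau: float, K_ss: float, p_d: float):
    """PD gains from matching tau*s^2 + (1 + K_ss*K_d)*s + K_ss*K_p
    against the desired polynomial tau*(s + p_d)^2.
    Assumed plant structure: K_ss/(tau*s + 1) from force to velocity."""
    Kp = tau * p_d**2 / K_ss          # from K_ss*Kp = tau*p_d^2
    Kd = (2.0 * p_d * tau - 1.0) / K_ss  # from 1 + K_ss*Kd = 2*tau*p_d
    return Kp, Kd
```

For example, with tau = 2, K_ss = 0.5, and p_d = 1, the closed-loop polynomial becomes 2s^2 + 4s + 2 = 2(s + 1)^2, i.e., a double pole at the desired location.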

To validate the proposed sensor fusion algorithm, the experimental work was carried out in the Norwegian Motion Laboratory; see Ref. [7] for more detailed information about the test setup. In Fig. 4, the lab facility is depicted to illustrate its physical size. The Motion Laboratory consists of two 6DOF Stewart platforms, which are used to simulate two ships moving asynchronously alongside each other. The motion trajectory prescribing the motions of the two platforms is generated in real-time using the simulation model presented in Sec. 3. The simulation model was realized in Simulink and implemented on a real-time target, specifically the Beckhoff CX2040 industrial PC. The Simulink model was compiled to real-time compatible code using Beckhoff's real-time module export tool for Simulink/MATLAB. Both Stewart platforms and the sensory equipment are connected to the CX2040, and hence the experiment was carried out using this device. A brief overview of the lab setup is given in Fig. 5.

The industrial robot placed on top of the larger Stewart platform remained inactive during all the experiments; however, its size is used to define the scale of the whole lab setup. Comparing the maximum operational reach of the robot (≈2.5 m) with MacGregor's three-dimensional compensated crane, which is stated to reach ≈25 m, the lab setup is believed to represent a scale of approximately 1:10 relative to a real-life ship-to-ship operation.

The figure also illustrates the real-time 6DOF laser tracker {c} and the tracking probe {p} placed on the small platform. The two MRU sensors are mounted on the respective platforms and are calibrated to measure the body movements of the platforms {b1} and {b2} relative to their neutral positions (heading frames) {n1} and {n2}. The communication/control interfaces of the platforms, the MRUs, and the control unit (CX2040) are realized over Ethernet using the User Datagram Protocol (UDP), while the Leica laser tracker is interfaced through a deterministic EtherCAT connection. The overall communication cycle runs at a 5 ms update interval, and each cycle is logged to a JSON file for offline postprocessing.

The proposed sensor fusion algorithm was realized numerically in MATLAB, with the logged data from the experimental lab setup used as inputs. The process and measurement covariances were tuned to give a satisfactory response. It should also be mentioned that the heave motion generated by the simulation described in Sec. 3 had to be scaled down in order not to violate the maximum stroke length of the Stewart platforms. As discussed in the problem formulation, the visual sensor may be interrupted by environmental disturbances such as sea spray, fog, or other obstacles; hence, Algorithm 1 is designed to account for such events. Of the total test period of 150 s, all three sensor measurements were used in the correction phase for the first 50 s; in the period $t \in [50, 100]$ s, the Leica laser tracker was deactivated to simulate loss of visual sight; and in the last 50 s, the Leica tracker was activated again to simulate that visual sight was re-established.

To measure the tracking error, the output of the EKF is compared with ground-truth values obtained from the Motion Laboratory feedback sensors; the laboratory was previously calibrated using a high precision laser tracker to determine the calibration parameters [15]. The output of the EKF algorithm represents the relative pose between the two ships given in frame {n1}, expressed as

$\hat{y} = \begin{bmatrix} \hat{r}_{b2/b1}^{n1} \\ \hat{\Theta}_{b1b2} \end{bmatrix} = \begin{bmatrix} -\hat{r}_{b1/n1}^{n1} + \hat{r}_{n2/n1}^{n1} + R_z(\Delta\hat{\psi})\,\hat{r}_{b2/n2}^{n2} \\ \mathrm{rotm2eul}\left(\hat{R}_{n1}^{b1}\,R_z(\Delta\hat{\psi})\,\hat{R}_{b2}^{n2}\right) \end{bmatrix}$

where $\hat{\cdot}$ indicates the estimated version of a variable, and rotm2eul(⋅) is the function used to calculate the Euler angles from the given rotation matrix argument. The estimation error is then directly calculated as

$e = y - \hat{y} = [x_e \;\; y_e \;\; z_e \;\; \phi_e \;\; \theta_e \;\; \psi_e]^T$

where e represents the error of the estimated EKF output $\hat{y}$ when compared with y, the internal high precision feedback of the two Stewart platforms. The estimation error e is divided into two parts: the position errors $(x_e, y_e, z_e)$, i.e., the errors in tracking the relative position in the x-, y-, and z-directions between the two bodies {b1} and {b2}, and the orientation errors $(\phi_e, \theta_e, \psi_e)$ in the relative body-to-body orientation. The error time series are given in Figs. 6 and 7, where the error trajectories are shown as functions of time. The period $t \in [50, 100]$ s is highlighted to show the reduced tracking performance when the Leica tracker is deactivated or unavailable to the EKF algorithm.

From the positional error time series in Fig. 6, it is evident that the positional tracking performance degrades significantly when the Leica tracker is unavailable to the EKF algorithm presented in Algorithm 1. This result is expected, since the MRU sensors fitted to the Stewart platforms are not as accurate as the Leica laser tracker, which offers state-of-the-art precision. The surge, sway, and yaw directions are especially inaccurate, because the internal MRU algorithm cannot achieve the same accuracy for these signals as for roll, pitch, and heave, according to Kongsberg's MRU datasheet [16] and previous experiments conducted with the MRUs in the Motion Laboratory.

As mentioned earlier, Fig. 7 shows that the orientation angles between the two body frames {b1} and {b2} are not affected as much as the position error in Fig. 6 when the Leica laser tracker is unavailable to the EKF algorithm. This is again because the roll and pitch MRU outputs are relatively robust and accurate compared with the positional outputs. All the experimental results of this test scenario are quantified in terms of the root-mean-square error $e_{RMS}$, the maximum absolute error $|e|_{MAX}$, the standard deviation $\sigma_e$, and the mean deviation $\mu_e$; Tables 1–3 summarize the EKF performance for all three test periods.

In this paper, estimation of the relative motion between two ships in six degrees-of-freedom has been demonstrated by experiments. The sensor fusion algorithm is based on a discrete implementation of the extended Kalman filter using two different sensor types (inertial measurement and time-of-flight). When both sensor types were available to the sensor fusion algorithm, the maximum absolute error in position was less than 10 mm with a standard deviation of less than 2 mm, and the maximum orientation error was less than 0.2 deg with a standard deviation of less than 0.05 deg. For 50 s in the middle of the experiment, the time-of-flight sensor was made unavailable to the sensor fusion algorithm to demonstrate the reliability and redundancy of the proposed solution. When the system used only the two motion reference units (one on each motion platform), the maximum absolute error in position increased by a factor of five to about 50 mm with a standard deviation of less than 12 mm, while the maximum orientation error increased by only 25% to about 0.25 deg with a standard deviation of less than 0.08 deg. The resulting ship-to-ship tracking performance, both with and without the Leica tracker, should still be acceptable even if the errors are multiplied by a conservative factor of 10 to account for the approximately 1:10 lab scale.

The results demonstrated in this paper show that real-time sensor fusion based on the extended Kalman filter with an update cycle time of 5 ms is achievable using two standard off-the-shelf MRUs (Kongsberg/SEATEX MRU H, Trondheim, Norway), an industrial PC (Beckhoff CX2040, Verl, Germany), and a laser tracker (Leica AT960, Aarau, Switzerland). The achieved accuracy was 50 mm or better in position and 0.25 deg or better in orientation, as verified using the internal sensors of the two motion generators (Stewart platforms). The resulting sensor fusion system presented in this paper aims at supporting the higher level of autonomy expected in future marine operations. In addition, it may contribute to future development of crane controllers capable of compensating for the secondary ship motion in real-time when transferring cargo from one ship onto another. Such crane capabilities may increase the weather window in which such operations are allowed today, and hence be economically beneficial, since ship-to-ship operations could be carried out in a more timely manner without waiting for suitable weather conditions.

In future development of the ship-to-ship state estimation problem, the relatively high-cost MRU sensors and the Leica laser tracker could be substituted with less costly sensors and the overall performance could be compared to the one presented in this work. In addition, one might consider including the two ship's physical parameters, either as static known parameters or as unknown parameters, which have to be estimated in real-time by augmenting them in the state vector.

The research presented in this paper has received funding from the Norwegian Research Council, SFI Offshore Mechatronics, project number 237896.


References

Balchen, J. G., Jenssen, N. A., Mathisen, E., and Sælid, S., 1980, "A Dynamic Positioning System Based on Kalman Filtering and Optimal Control," Model., Identif. Control, 1(3), pp. 135–163.
Perez, T., 2017, "Dynamic Positioning," Encyclopedia of Maritime and Offshore Engineering, Wiley, Hoboken, NJ.
Kjelland, M. B., 2016, "Offshore Wind Turbine Access Using Knuckle Boom Cranes," Ph.D. thesis, University of Agder, Grimstad, Norway.
Tørdal, S. S., Løvsland, P.-O., and Hovland, G., 2016, "Testing of Wireless Sensor Performance in Vessel-to-Vessel Motion Compensation," 42nd Annual Conference of the IEEE Industrial Electronics Society (IECON), Florence, Italy, Oct. 23–26, pp. 654–659.
Jokioinen, E., Poikonen, J., Jalonen, R., and Saarni, J., 2016, "Remote and Autonomous Ships: The Next Steps," AAWA Position Paper, Rolls Royce plc, London.
Fossen, T. I., 2011, Handbook of Marine Craft Hydrodynamics and Motion Control, Wiley, Hoboken, NJ.
Tørdal, S. S., Pawlus, W., and Hovland, G., 2017, "Real-Time 6-DOF Vessel-to-Vessel Motion Compensation Using Laser Tracker," The Marine Technology Society and the IEEE Oceanic Engineering Society (OCEANS '17), Aberdeen, UK, June 19–22, pp. 1–9.
Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., and Marín-Jiménez, M., 2014, "Automatic Generation and Detection of Highly Reliable Fiducial Markers Under Occlusion," Pattern Recognit., 47(6), pp. 2280–2292.
Tørdal, S. S., and Hovland, G., 2017, "Relative Vessel Motion Tracking Using Sensor Fusion, ArUco Markers, and MRU Sensors," Model., Identif. Control, 38(2), pp. 79–93.
Schön, T. B., 2010, "Solving Nonlinear State Estimation Problems Using Particle Filters: An Engineering Perspective," Division of Automatic Control, Linköping University, Linköping, Sweden.
Myhre, T. A., and Egeland, O., 2017, "Estimation of Crane Load Parameters During Tracking Using Expectation-Maximization," American Control Conference (ACC), Seattle, WA, May 24–26, pp. 4556–4562.
Fossen, T. I., and Smogeli, Ø. N., 2004, "Nonlinear Time-Domain Strip Theory Formulation for Low-Speed Manoeuvring and Station-Keeping," Model., Identif. Control, 25(4), pp. 201–221.
Perez, T., Smogeli, Ø. N., Fossen, T. I., and Sørensen, A. J., 2006, "An Overview of the Marine Systems Simulator (MSS): A Simulink Toolbox for Marine Control Systems," Model., Identif. Control, 27(4), pp. 259–275.
Pierson, W. J., and Moskowitz, L., 1964, "A Proposed Spectral Form for Fully Developed Wind Seas Based on the Similarity Theory of S. A. Kitaigorodskii," J. Geophys. Res., 69(24), pp. 5181–5190.
Heng, O., and Tørdal, S. S., 2017, "Calibration of the Norwegian Motion Laboratory Using Conformal Geometric Algebra," Computer Graphics International Conference (CGI '17), Yokohama, Japan, June 27–30, p. 5.
Kongsberg SEATEX, 2018, "MRU H 5th Generation Datasheet," Kongsberg SEATEX, Trondheim, Norway.
Copyright © 2019 by ASME; use license CC-BY 4.0


Figures

Fig. 1

Illustration of the relative ship motion tracking problem, where two ships lie alongside each other and a suspended load is to be landed on the secondary ship's deck using an offshore crane

Fig. 2

Ship-to-ship body kinematics

Fig. 3

Comparison between the linear transfer function magnitude |h(jω)|² and the Pierson–Moskowitz wave spectrum S(ω) with significant wave height Hs = 8 m and typical wave period Tp = 12 s

Fig. 4

Picture of the Norwegian Motion Laboratory, located in the University of Agder's Mechatronics laboratory at Campus Grimstad, Norway

Fig. 5

Illustration of the equipment used to carry out the lab experiments. The lab consists of two Stewart platforms (Bosch Rexroth EM8000 and EM1500), an industrial robot (Comau SMART 5 NJ 110 3.0), two MRU sensors (Kongsberg/SEATEX MRU H), a Leica laser tracker (Leica AT960), and its accompanying tracking probe (T-Mac TMC30-F).

Fig. 6

Position estimation error when compared to the internal feedback sensors of both the Stewart platforms

Fig. 7

Orientation estimation error when compared to the internal feedback sensors of both the Stewart platforms

Tables

Algorithm 1 Extended Kalman filter implementation
Table 1 Extended Kalman filter sensor fusion results for t ∈ [0, 50]
Table 2 Extended Kalman filter sensor fusion results for t ∈ [50, 100]
Table 3 Extended Kalman filter sensor fusion results for t ∈ [100, 150]
